From noreply at r-forge.r-project.org  Thu Aug  1 11:13:54 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Thu, 1 Aug 2013 11:13:54 +0200 (CEST)
Subject: [Returnanalytics-commits] r2691 - in pkg/Meucci: R demo man
Message-ID: <20130801091354.43541184D99@r-forge.r-project.org>

Author: xavierv
Date: 2013-08-01 11:13:53 +0200 (Thu, 01 Aug 2013)
New Revision: 2691

Added:
   pkg/Meucci/demo/S_FactorResidualCorrelation.R
   pkg/Meucci/demo/S_SwapPca2Dim.R
Modified:
   pkg/Meucci/R/GenerateUniformDrawsOnUnitSphere.R
   pkg/Meucci/R/MvnRnd.R
   pkg/Meucci/man/GenerateUniformDrawsOnUnitSphere.Rd
Log:
- added S_FactorResidualCorrelation and S_SwapPca2Dim demo scripts from chapter 3

Modified: pkg/Meucci/R/GenerateUniformDrawsOnUnitSphere.R
===================================================================
--- pkg/Meucci/R/GenerateUniformDrawsOnUnitSphere.R	2013-07-31 20:51:16 UTC (rev 2690)
+++ pkg/Meucci/R/GenerateUniformDrawsOnUnitSphere.R	2013-08-01 09:13:53 UTC (rev 2691)
@@ -7,9 +7,9 @@
 #' @return X : [matrix] (T x N) of draws
 #'
 #'@note
-#' \item{ Initial script by Xiaoyu Wang - Dec 2006}
-#' \item{ We decompose X=U*R, where U is a uniform distribution on unit sphere and
-# R is a distribution on (0,1) proportional to r^(Dims-1), i.e. the area of surface of radius r }
+#' Initial script by Xiaoyu Wang - Dec 2006
+#' We decompose X=U*R, where U is a uniform distribution on unit sphere and
+#' R is a distribution on (0,1) proportional to r^(Dims-1), i.e. the area of surface of radius r
 #'
 #' @references
 #' \url{http://symmys.com/node/170}
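The note above spells out the sampling decomposition X = U*R. A minimal stand-alone sketch of that decomposition, for readers of the archive; the function name below is made up and this is not the package's implementation:

UniformBallDraws = function( T, N )
{
    Z = matrix( rnorm( T * N ), T, N );
    U = Z / sqrt( rowSums( Z ^ 2 ) ); # normalized Gaussian rows: uniform directions on the unit sphere
    R = runif( T ) ^ ( 1 / N );       # radius with density proportional to r^(N-1)
    return( U * R );                  # (T x N) matrix of draws, as in the @return above
}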
Modified: pkg/Meucci/R/MvnRnd.R
===================================================================
--- pkg/Meucci/R/MvnRnd.R	2013-07-31 20:51:16 UTC (rev 2690)
+++ pkg/Meucci/R/MvnRnd.R	2013-08-01 09:13:53 UTC (rev 2691)
@@ -1,5 +1,3 @@
-if ( !require( "QZ" ) ) stop("QZ package installation required for this script")
-
 #' Generate normal simulations whose sample moments match the population moments,
 #' as described in A. Meucci, "Risk and Asset Allocation", Springer, 2005.
 #'
@@ -18,6 +16,7 @@
 
 MvnRnd = function( M, S, J )
 {
+	if ( !require( "QZ" ) ) stop("QZ package installation required for this script")
 	N = length(M);
 
 	# generate antithetic variables (mean = 0)

Added: pkg/Meucci/demo/S_FactorResidualCorrelation.R
===================================================================
--- pkg/Meucci/demo/S_FactorResidualCorrelation.R	                        (rev 0)
+++ pkg/Meucci/demo/S_FactorResidualCorrelation.R	2013-08-01 09:13:53 UTC (rev 2691)
@@ -0,0 +1,40 @@
+#' This script illustrates how, with exogenous loadings and endogenous factors recovered by cross-sectional
+#' regression, the residuals of a linear factor model are correlated both with each other and with the factors,
+#' as described in A. Meucci, "Risk and Asset Allocation", Springer, 2005, Chapter 3.
+#'
+#' @references
+#' \url{http://symmys.com/node/170}
+#' See Meucci's script for "S_FactorResidualCorrelation.m"
+#'
+#' @author Xavier Valls \email{flamejat@@gmail.com}
+
+##################################################################################################################
+### Input parameters:
+N     = 4; # market size
+nSim  = 10000;
+mu    = 0.1 + 0.3 * runif(N);
+sigma = 0.5 * mu;
+dd    = matrix(rnorm( N*N ), N, N );
+Corr  = cov2cor( dd %*% t( dd ) );
+Sigma = diag( sigma, length(sigma) ) %*% Corr %*% diag( sigma, length(sigma) );
+
+##################################################################################################################
+### Generate simulations for X
+X = MvnRnd(mu, Sigma, nSim);
+
+##################################################################################################################
+### Generate a random vector beta
+beta = matrix(1, N ) + rnorm(N) * 0.1;
+
+##################################################################################################################
+### Compute factor realization by cross-sectional regression and residuals
+# the two expressions below are equivalent estimators; the second assignment is the one kept
+# F = ( X %*% beta ) / ( t( beta ) %*% (beta) )[1];
+F = solve( t( beta ) %*% beta)[1] * ( X %*% beta );
+
+# compute residual
+U = X - F %*% t(beta);
+
+# correlation of residuals U among themselves and with factors F
+R = cor( cbind( F, U ) );
+print(R);


Added: pkg/Meucci/demo/S_SwapPca2Dim.R
===================================================================
--- pkg/Meucci/demo/S_SwapPca2Dim.R	                        (rev 0)
+++ pkg/Meucci/demo/S_SwapPca2Dim.R	2013-08-01 09:13:53 UTC (rev 2691)
@@ -0,0 +1,77 @@
+#' This script performs the principal component analysis of a simplified two-point swap curve.
+#' It computes and plots, among others,
+#' 1. the invariants, namely rate changes
+#' 2. the scatter plot of the invariants along with their 2-d location-dispersion ellipsoid
+#' 3. the effect on the curve of the two uncorrelated principal factors
+#' Described in A. Meucci, "Risk and Asset Allocation", Springer, 2005, Chapter 3.
+#'
+#' @references
+#' \url{http://symmys.com/node/170}
+#' See Meucci's script for "S_SwapPca2Dim.m"
+#'
+#' @author Xavier Valls \email{flamejat@@gmail.com}
+##################################################################################################################
+### Load data
+load( "../data/swap2y4y.mat" );
+
+##################################################################################################################
+### Current curve
+Current_Curve = swap2y4y$Rates[ nrow( swap2y4y$Rates ), ];
+dev.new();
+plot(c( 2, 4 ), Current_Curve, type = "l", main = "Current_Curve", xlab = "time to maturity, years",
+	ylab = "par swap rate, %" );
+
+##################################################################################################################
+### Determine weekly invariants (changes in rates)
+Keep = seq( 1, length(swap2y4y$Dates), 5 );
+
+Rates = swap2y4y$Rates[ Keep, ];
+X = Rates[ -1, ] - Rates[ -nrow( Rates ), ];
+
+Dates = swap2y4y$Dates[ Keep ];
+
+PerformIidAnalysis( Dates[ -1 ], X[ , 1 ], "weekly 2yr rates" );
+PerformIidAnalysis( Dates[ -1 ], X[ , 2 ], "weekly 4yr rates" );
+
+# scatter plot of invariants
+dev.new();
+plot( X[ , 1 ], X[ , 2 ], xlab = "2yr rate", ylab = "4yr rate" );
+m = 0 * matrix( apply( X, 2, mean ) ); # estimator shrunk to zero
+S = cov(X);
+TwoDimEllipsoid(m, S, 2, TRUE, FALSE);
+
+
+##################################################################################################################
+### Perform PCA
+E = eigen(S);
+# sort eigenvalues in decreasing order
+Index   = order(-E$values);
+EigVals = E$values[ Index ];
+EigVecs = E$vectors[ , Index ];
+
+# plot eigenvectors
+dev.new();
+plot( c( 2, 4 ), EigVecs[ , 1 ], type = "l", col = "red", xlab = "Time to maturity, years", ylab = "" );
+lines( c( 2, 4 ), EigVecs[ , 2 ], col = "green" );
+legend("topleft", 1.9, c("1st factor loading","2nd factor loading"), col = c( "red" , "green"),
+	lty = 1, bg = "gray90" );
+
+# factors
+F = X %*% EigVecs;
+F_std = apply( F, 2, sd);
+
+dev.new(); # 1-st factor effect
+plot( c( 2, 4), Current_Curve, type = "l", xlab = "Time to maturity, years", ylab = "", ylim = c( 4.9, 5.3) );
+lines( c( 2, 4), matrix( Current_Curve ) + F_std[ 1 ] * EigVecs[ , 1 ], col = "red" );
+lines( c( 2, 4), matrix( Current_Curve ) - F_std[ 1 ] * EigVecs[ , 1 ], col = "green" );
+legend("topleft", 1.9, c( "base", "+1 sd of 1st fact","-1 sd of 1st fact"), col = c( "black", "red" , "green"),
+	lty = 1, bg = "gray90" );
+
+dev.new(); # 2-nd factor effect
+plot( c( 2, 4), Current_Curve, type = "l", xlab = "Time to maturity, years", ylab = "", ylim = c( 5, 5.15) );
+lines( c( 2, 4), matrix( Current_Curve ) + F_std[ 2 ] * EigVecs[ , 2 ], col = "red" );
+lines( c( 2, 4), matrix( Current_Curve ) - F_std[ 2 ] * EigVecs[ , 2 ], col = "green" );
+legend("topleft", 1.9, c( "base", "+1 sd of 2nd fact","-1 sd of 2nd fact"), col = c( "black", "red" , "green"),
+	lty = 1, bg = "gray90" );
+
+# generalized R2
+R2 = cumsum(EigVals) / sum(EigVals); # first entry: one factor, second entry: both factors
+print(R2);


Modified: pkg/Meucci/man/GenerateUniformDrawsOnUnitSphere.Rd
===================================================================
--- pkg/Meucci/man/GenerateUniformDrawsOnUnitSphere.Rd	2013-07-31 20:51:16 UTC (rev 2690)
+++ pkg/Meucci/man/GenerateUniformDrawsOnUnitSphere.Rd	2013-08-01 09:13:53 UTC (rev 2691)
@@ -19,9 +19,9 @@
   Springer, 2005.
 }
 \note{
-  \item{ Initial script by Xiaoyu Wang - Dec 2006} \item{
-  We decompose X=U*R, where U is a uniform distribution on
-  unit sphere and
+  Initial script by Xiaoyu Wang - Dec 2006 We decompose
+  X=U*R, where U is a uniform distribution on unit sphere
+  and
 }
 \author{
   Xavier Valls \email{flamejat at gmail.com}

From noreply at r-forge.r-project.org  Fri Aug  2 01:05:20 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Fri, 2 Aug 2013 01:05:20 +0200 (CEST)
Subject: [Returnanalytics-commits] r2692 - pkg/PerformanceAnalytics/sandbox/pulkit/week6
Message-ID: <20130801230521.023251812D4@r-forge.r-project.org>

Author: pulkit
Date: 2013-08-02 01:05:20 +0200 (Fri, 02 Aug 2013)
New Revision: 2692

Modified:
   pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R
Log:
CDaR Multipath

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R	2013-08-01 09:13:53 UTC (rev 2691)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R	2013-08-01 23:05:20 UTC (rev 2692)
@@ -1,6 +1,7 @@
 CDD<-function (R, weights = NULL, geometric = TRUE, invert = TRUE, p = 0.95, ...)
 {
+    alpha = p
     #p = .setalphaprob(p)
     if (is.vector(R) || ncol(R) == 1) {
         R = na.omit(R)

From noreply at r-forge.r-project.org  Fri Aug  2 01:08:54 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Fri, 2 Aug 2013 01:08:54 +0200 (CEST)
Subject: [Returnanalytics-commits] r2693 - pkg/PerformanceAnalytics/sandbox/pulkit/week6
Message-ID: <20130801230854.2EEAE184699@r-forge.r-project.org>

Author: pulkit
Date: 2013-08-02 01:08:53 +0200 (Fri, 02 Aug 2013)
New Revision: 2693

Added:
   pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R
Log:
CDaR Multipath

Added: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R	2013-08-01 23:08:53 UTC (rev 2693)
@@ -0,0 +1,36 @@
+#'@title
+#'Conditional Drawdown at Risk for Multiple Sample Path
+#'
+#'@description
+#'
+#' For a given \eqn{\alpha \in [0,1]} in the multiple sample-paths setting, CDaR,
+#' denoted by \eqn{D_{\alpha}(w)}, is the average of the \eqn{(1-\alpha) \cdot 100\%} largest drawdowns
+#' of the set \eqn{\{d_{st} \mid t = 1,\dots,T,\; s = 1,\dots,S\}}, and is defined by
+#'
+#' \deqn{D_{\alpha}(w) = \max_{\{q_{st}\} \in Q} \sum_{s=1}^{S} \sum_{t=1}^{T} p_s q_{st} d_{st},}
+#'
+#' where
+#'
+#' \deqn{Q = \left\{ \{q_{st}\}_{s,t=1}^{S,T} \;\middle|\; \sum_{s=1}^{S} \sum_{t=1}^{T} p_s q_{st} = 1,\; 0 \leq q_{st} \leq \frac{1}{(1-\alpha)T},\; s = 1,\dots,S,\; t = 1,\dots,T \right\}.}
+#'
+#' For \eqn{\alpha = 1}, \eqn{D_{\alpha}(w)} is defined by (3) with the constraint \eqn{0 \leq q_{st} \leq \frac{1}{(1-\alpha)T}}
+#' in Q replaced by \eqn{q_{st} \geq 0}.
+#'
+#' As in the case of a single sample-path, the CDaR definition includes two special cases: (i) for \eqn{\alpha = 1},
+#' \eqn{D_1(w)} is the maximum drawdown, also called drawdown from peak-to-valley, and (ii) for \eqn{\alpha = 0}, \eqn{D_{\alpha}(w)}
+#' is the average drawdown.
+#'
+#'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of multiple sample path returns
+#'@param ps the probability for each sample path
+#'@param p confidence level for calculation, default(p=0.95)
+#'@param \dots any other passthru parameters
+#'
+#'@references
+#'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM) with Drawdown Measure.
+#'Research Report 2012-9, ISE Dept., University of Florida, September 2012
+
+
+
+CdarMultiPath<-function(){
+
+}
\ No newline at end of file
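The committed CdarMultiPath() body is still empty, so here is one way the quantity defined in the roxygen block above could be computed, assuming returns arrive as a T x S matrix (one column per sample path) and that drawdowns are taken on the compounded path. The function name, signature and drawdown convention are illustrative, not the package's API:

CdarMultiPath.sketch <- function( R, ps = rep( 1/ncol(R), ncol(R) ), p = 0.95 )
{
    R <- as.matrix( R )
    # drawdown d_st of each compounded sample path s at each time t
    d <- apply( R, 2, function( x ) {
        cum <- cumprod( 1 + x )
        1 - cum / cummax( cum )
    } )
    # pool the T*S drawdowns; observation (s,t) carries probability ps[s]/T
    w   <- rep( ps / nrow( R ), each = nrow( R ) )
    ord <- order( as.vector( d ), decreasing = TRUE )
    d   <- as.vector( d )[ ord ]
    w   <- w[ ord ]
    # the maximizing q_st put their capped mass 1/((1-alpha)T) on the largest
    # drawdowns, so CDaR is the probability-weighted mean of the worst (1-p) tail
    tail.mass <- 1 - p
    cw <- cumsum( w )
    k  <- which( cw >= tail.mass - 1e-12 )[ 1 ]
    q  <- c( w[ seq_len( k - 1 ) ], tail.mass - if ( k > 1 ) cw[ k - 1 ] else 0 )
    sum( q * d[ seq_len( k ) ] ) / tail.mass
}

Taking p close to 1 recovers the maximum drawdown across paths, and p = 0 the probability-weighted average drawdown, matching the two special cases noted above.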
From noreply at r-forge.r-project.org  Fri Aug  2 01:13:26 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Fri, 2 Aug 2013 01:13:26 +0200 (CEST)
Subject: [Returnanalytics-commits] r2694 - in pkg/FactorAnalytics: R man
Message-ID: <20130801231326.315DB185BE3@r-forge.r-project.org>

Author: chenyian
Date: 2013-08-02 01:13:25 +0200 (Fri, 02 Aug 2013)
New Revision: 2694

Modified:
   pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R
   pkg/FactorAnalytics/man/fitTimeseriesFactorModel.Rd
Log:
add up/down beta and quadratic term option in fitTimeSeriesFactorModel.R

Modified: pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R
===================================================================
--- pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R	2013-08-01 23:08:53 UTC (rev 2693)
+++ pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R	2013-08-01 23:13:25 UTC (rev 2694)
@@ -1,8 +1,12 @@
 #' Fit time series factor model by time series regression techniques.
 #'
-#' Fit time series factor model by time series regression techniques. It
+#' @description Fit time series factor model by time series regression techniques. It
 #' creates the class of "TimeSeriesFactorModel".
 #'
+#' @details add.up.market.returns adds a max(0,Rm-Rf) term in the regression, as suggested by
+#' the Henriksson-Merton (1981) model, to measure market timing. The coefficient can be
+#' interpreted as the number of free put options.
+#'
 #' If \code{Robust} is chosen, there is no subsets but all factors will be
 #' used. Cp is defined in
 #' http://www-stat.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf. p17.
@@ -10,8 +14,8 @@
 #' @param assets.names names of assets returns.
 #' @param factors.names names of factors returns.
 #' @param num.factor.subset scalar. Number of factors selected by all subsets.
-#' @param data a vector, matrix, data.frame, xts, timeSeries or zoo object with asset returns
-#' and factors retunrs rownames
+#' @param data a vector, matrix, data.frame, xts, timeSeries or zoo object with \code{assets.names}
+#' and \code{factors.names} or \code{excess.market.returns.name} if necessary.
 #' @param fit.method "OLS" is ordinary least squares method, "DLS" is
 #' discounted least squares method. Discounted least squares (DLS) estimation
 #' is weighted least squares estimation with exponentially declining weights
@@ -32,24 +36,38 @@
 #' in all models.
 #' @param subsets.method control option for all subsets. Use exhaustive search,
 #' forward selection, backward selection or sequential replacement to search.
-#' @param lars.criteria either choose minimum "Cp": unbiased estimator of the
+#' @param lars.criteria either choose minimum "cp": unbiased estimator of the
 #' true risk, or "cv" 10 folds cross-validation. Default is "cp". See detail.
+#' @param add.up.market.returns Logical. If \code{TRUE}, max(0,Rm-Rf) will be added as a regressor.
+#' Default is \code{FALSE}. \code{excess.market.returns.name} is required if \code{TRUE}. See Detail.
+#' @param add.quadratic.term Logical. If \code{TRUE}, (Rm-Rf)^2 will be added as a regressor.
+#' \code{excess.market.returns.name} is required if \code{TRUE}. Default is \code{FALSE}.
+#' @param excess.market.returns.name column name of the market returns in excess of the
+#' risk free rate (Rm-Rf).
 #' @return an S3 object containing
 #' \itemize{
-#' \item{asset.fit}{Fit objects for each asset. This is the class "lm" for
+#' \item{asset.fit} {Fit objects for each asset. This is the class "lm" for
 #' each object.}
-#' \item{alpha}{N x 1 Vector of estimated alphas.}
-#' \item{beta}{N x K Matrix of estimated betas.}
-#' \item{r2}{N x 1 Vector of R-square values.}
-#' \item{resid.variance}{N x 1 Vector of residual variances.}
-#' \item{call}{function call.}
+#' \item{alpha} {N x 1 Vector of estimated alphas.}
+#' \item{beta} {N x K Matrix of estimated betas.}
+#' \item{r2} {N x 1 Vector of R-square values.}
+#' \item{resid.variance} {N x 1 Vector of residual variances.}
+#' \item{call} {function call.}
 #' }
+#'
+#'
 #' @author Eric Zivot and Yi-An Chen.
-#' @references 1. Efron, Hastie, Johnstone and Tibshirani (2002) "Least Angle
+#' @references
+#' \enumerate{
+#' \item Efron, Hastie, Johnstone and Tibshirani (2002) "Least Angle
 #' Regression" (with discussion) Annals of Statistics; see also
-#' http://www-stat.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf. 2.
-#' Hastie, Tibshirani and Friedman (2008) Elements of Statistical Learning 2nd
+#' http://www-stat.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf.
+#' \item Hastie, Tibshirani and Friedman (2008) Elements of Statistical Learning 2nd
 #' edition, Springer, NY.
+#' \item Christopherson, Carino and Ferson (2009). Portfolio Performance Measurement
+#' and Benchmarking, McGraw Hill.
+#' }
 #' @examples
 #' \dontrun{
 #' # load data from the database
@@ -72,7 +90,8 @@
                                    variable.selection="none",
                                    decay.factor = 0.95,nvmax=8,force.in=NULL,
                                    subsets.method = c("exhaustive", "backward", "forward", "seqrep"),
-                                   lars.criteria = "Cp") {
+                                   lars.criteria = "cp",add.up.market.returns = FALSE,add.quadratic.term = FALSE,
+                                   excess.market.returns.name ) {
 
 require(PerformanceAnalytics)
 require(leaps)
@@ -84,7 +103,9 @@
 # convert data into xts and hereafter compute in xts
   data.xts <- checkData(data)
   reg.xts <- merge(data.xts[,assets.names],data.xts[,factors.names])
-
+  if (add.up.market.returns == TRUE || add.quadratic.term == TRUE ) {
+    reg.xts <- merge(reg.xts,data.xts[,excess.market.returns.name])
+  }
 # initialize list object to hold regression objects
 reg.list = list()
@@ -93,17 +114,38 @@
 # residual variances, and R-square values from
 # fitted factor models
 
-Alphas = ResidVars = R2values = rep(0, length(assets.names))
+Alphas = ResidVars = R2values = rep(NA, length(assets.names))
 names(Alphas) = names(ResidVars) = names(R2values) = assets.names
-Betas = matrix(0, length(assets.names), length(factors.names))
+Betas = matrix(NA, length(assets.names), length(factors.names))
 colnames(Betas) = factors.names
 rownames(Betas) = assets.names
-
+if(add.up.market.returns == TRUE ) {
+  Betas <- cbind(Betas,rep(NA,length(assets.names)))
+  colnames(Betas)[dim(Betas)[2]] <- "up.beta"
+}
+
+if(add.quadratic.term == TRUE ) {
+  Betas <- cbind(Betas,rep(NA,length(assets.names)))
+  colnames(Betas)[dim(Betas)[2]] <- "quadratic.term"
+}
+
+#
+### plain vanilla method
+#
 if (variable.selection == "none") {
   if (fit.method == "OLS") {
           
 for (i in assets.names) {
- reg.df = na.omit(reg.xts[, c(i, factors.names)])
+ reg.df = na.omit(reg.xts[, c(i, factors.names)])
+ if(add.up.market.returns == TRUE) {
+   up.beta <- apply(reg.xts[,excess.market.returns.name],1,max,0)
+   reg.df = merge(reg.df,up.beta)
+ }
+ if(add.quadratic.term == TRUE) {
+   quadratic.term <- reg.xts[,excess.market.returns.name]^2
+   reg.df = merge(reg.df,quadratic.term)
+
colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term" + } fm.formula = as.formula(paste(i,"~", ".", sep=" ")) fm.fit = lm(fm.formula, data=reg.df) fm.summary = summary(fm.fit) @@ -117,6 +159,15 @@ } else if (fit.method == "DLS") { for (i in assets.names) { reg.df = na.omit(reg.xts[, c(i, factors.names)]) + if(add.up.market.returns == TRUE) { + up.beta <- apply(reg.xts[,excess.market.returns.name],1,max,0) + reg.df = merge(reg.df,up.beta) + } + if(add.quadratic.term == TRUE) { + quadratic.term <- reg.xts[,excess.market.returns.name]^2 + reg.df = merge(reg.df,quadratic.term) + colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term" + } t.length <- nrow(reg.df) w <- rep(decay.factor^(t.length-1),t.length) for (k in 2:t.length) { @@ -137,6 +188,15 @@ } else if (fit.method=="Robust") { for (i in assets.names) { reg.df = na.omit(reg.xts[, c(i, factors.names)]) + if(add.up.market.returns == TRUE) { + up.beta <- apply(reg.xts[,excess.market.returns.name],1,max,0) + reg.df = merge(reg.df,up.beta) + } + if(add.quadratic.term == TRUE) { + quadratic.term <- reg.xts[,excess.market.returns.name]^2 + reg.df = merge(reg.df,quadratic.term) + colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term" + } fm.formula = as.formula(paste(i,"~", ".", sep=" ")) fm.fit = lmRob(fm.formula, data=reg.df) fm.summary = summary(fm.fit) @@ -151,17 +211,27 @@ stop("invalid method") } - -} else if (variable.selection == "all subsets") { +# +### subset methods +# +} + else if (variable.selection == "all subsets") { # estimate multiple factor model using loop b/c of unequal histories for the hedge funds - - if (fit.method == "OLS") { if (num.factor.subset == length(force.in)) { for (i in assets.names) { reg.df = na.omit(reg.xts[, c(i, force.in)]) + if(add.up.market.returns == TRUE) { + up.beta <- apply(reg.xts[,excess.market.returns.name],1,max,0) + reg.df = merge(reg.df,up.beta) + } + if(add.quadratic.term == TRUE) { + quadratic.term <- reg.xts[,excess.market.returns.name]^2 + reg.df = merge(reg.df,quadratic.term) + colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term" + } fm.formula = as.formula(paste(i,"~", ".", sep=" ")) fm.fit = lm(fm.formula, data=reg.df) fm.summary = summary(fm.fit) @@ -181,6 +251,15 @@ method=subsets.method) sum.sub <- summary(fm.subsets) reg.df <- na.omit(reg.xts[,c(i,names(which(sum.sub$which[as.character(num.factor.subset),-1]==TRUE)) )]) + if(add.up.market.returns == TRUE) { + up.beta <- apply(reg.xts[,excess.market.returns.name],1,max,0) + reg.df = merge(reg.df,up.beta) + } + if(add.quadratic.term == TRUE) { + quadratic.term <- reg.xts[,excess.market.returns.name]^2 + reg.df = merge(reg.df,quadratic.term) + colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term" + } fm.fit = lm(fm.formula, data=reg.df) fm.summary = summary(fm.fit) reg.list[[i]] = fm.fit @@ -197,13 +276,23 @@ -} else if (fit.method == "DLS"){ +} +else if (fit.method == "DLS"){ if (num.factor.subset == length(force.in)) { # define weight matrix for (i in assets.names) { reg.df = na.omit(reg.xts[, c(i, force.in)]) + if(add.up.market.returns == TRUE) { + up.beta <- apply(reg.xts[,excess.market.returns.name],1,max,0) + reg.df = merge(reg.df,up.beta) + } + if(add.quadratic.term == TRUE) { + quadratic.term <- reg.xts[,excess.market.returns.name]^2 + reg.df = merge(reg.df,quadratic.term) + colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term" + } t.length <- nrow(reg.df) w <- rep(decay.factor^(t.length-1),t.length) for (k in 2:t.length) { @@ -235,6 +324,15 @@ method=subsets.method,weights=w) # w is called from global envio sum.sub <- 
summary(fm.subsets) reg.df <- na.omit(reg.xts[,c(i,names(which(sum.sub$which[as.character(num.factor.subset),-1]==TRUE)) )]) + if(add.up.market.returns == TRUE) { + up.beta <- apply(reg.xts[,excess.market.returns.name],1,max,0) + reg.df = merge(reg.df,up.beta) + } + if(add.quadratic.term == TRUE) { + quadratic.term <- reg.xts[,excess.market.returns.name]^2 + reg.df = merge(reg.df,quadratic.term) + colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term" + } fm.fit = lm(fm.formula, data=reg.df,weight=w) fm.summary = summary(fm.fit) reg.list[[i]] = fm.fit @@ -249,31 +347,50 @@ } -} else if (fit.method=="Robust") { +} +else if (fit.method=="Robust") { for (i in assets.names) { - reg.df = na.omit(reg.xts[, c(i, factors.names)]) - fm.formula = as.formula(paste(i,"~", ".", sep=" ")) - fm.fit = lmRob(fm.formula, data=reg.df) - fm.summary = summary(fm.fit) - reg.list[[i]] = fm.fit - Alphas[i] = coef(fm.fit)[1] - Betas[i, ] = coef(fm.fit)[-1] - ResidVars[i] = fm.summary$sigma^2 - R2values[i] = fm.summary$r.squared - } + reg.df = na.omit(reg.xts[, c(i, factors.names)]) + if(add.up.market.returns == TRUE) { + up.beta <- apply(reg.xts[,excess.market.returns.name],1,max,0) + reg.df = merge(reg.df,up.beta) + } + if(add.quadratic.term == TRUE) { + quadratic.term <- reg.xts[,excess.market.returns.name]^2 + reg.df = merge(reg.df,quadratic.term) + colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term" + } + fm.formula = as.formula(paste(i,"~", ".", sep=" ")) + fm.fit = lmRob(fm.formula, data=reg.df) + fm.summary = summary(fm.fit) + reg.list[[i]] = fm.fit + Alphas[i] = coef(fm.fit)[1] + Betas[i, ] = coef(fm.fit)[-1] + ResidVars[i] = fm.summary$sigma^2 + R2values[i] = fm.summary$r.squared + } } else { stop("invalid method") } -} else if (variable.selection == "stepwise") { +} + else if (variable.selection == "stepwise") { - if (fit.method == "OLS") { # loop over all assets and estimate time series regression for (i in assets.names) { reg.df = na.omit(reg.xts[, c(i, factors.names)]) + if(add.up.market.returns == TRUE) { + up.beta <- apply(reg.xts[,excess.market.returns.name],1,max,0) + reg.df = merge(reg.df,up.beta) + } + if(add.quadratic.term == TRUE) { + quadratic.term <- reg.xts[,excess.market.returns.name]^2 + reg.df = merge(reg.df,quadratic.term) + colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term" + } fm.formula = as.formula(paste(i,"~", ".", sep=" ")) fm.fit = step(lm(fm.formula, data=reg.df),trace=0) fm.summary = summary(fm.fit) @@ -286,10 +403,20 @@ } -} else if (fit.method == "DLS"){ +} + else if (fit.method == "DLS"){ # define weight matrix for (i in assets.names) { reg.df = na.omit(reg.xts[, c(i, factors.names)]) + if(add.up.market.returns == TRUE) { + up.beta <- apply(reg.xts[,excess.market.returns.name],1,max,0) + reg.df = merge(reg.df,up.beta) + } + if(add.quadratic.term == TRUE) { + quadratic.term <- reg.xts[,excess.market.returns.name]^2 + reg.df = merge(reg.df,quadratic.term) + colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term" + } t.length <- nrow(reg.df) w <- rep(decay.factor^(t.length-1),t.length) for (k in 2:t.length) { @@ -308,9 +435,24 @@ R2values[i] = fm.summary$r.squared } -} else if (fit.method=="Robust") { - for (i in assets.names) { - assign("reg.df" , na.omit(reg.xts[, c(i, factors.names)]),envir = .GlobalEnv ) +} + else if (fit.method =="Robust") { + for (i in assets.names) { + assign("reg.df" , na.omit(reg.xts[, c(i, factors.names)]),envir = .GlobalEnv ) +# reg.df = na.omit(reg.xts[, c(i, factors.names)],envir = .GlobalEnv) + if(add.up.market.returns == TRUE) { + stop("This 
function does not support add.up.market.returns and stepwise variable.selection
+            together. Please choose either one.")
+    up.beta <- apply(reg.xts[,excess.market.returns.name],1,max,0)
+    reg.df = merge(reg.df,up.beta)
+  }
+  if(add.quadratic.term == TRUE) {
+    stop("This function does not support add.quadratic.term and stepwise variable.selection
+          together. Please choose either one.")
+    quadratic.term <- reg.xts[,excess.market.returns.name]^2
+    reg.df = merge(reg.df,quadratic.term)
+    colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term"
+  }
  fm.formula = as.formula(paste(i,"~", ".", sep=" "))
  lmRob.obj <- lmRob(fm.formula, data=reg.df)
  fm.fit = step.lmRob(lmRob.obj,trace=FALSE)
@@ -330,10 +472,19 @@
 
 for (i in assets.names) {
  reg.df = na.omit(reg.xts[, c(i, factors.names)])
- reg.df = as.matrix(reg.df)
+ if(add.up.market.returns == TRUE) {
+   up.beta <- apply(reg.xts[,excess.market.returns.name],1,max,0)
+   reg.df = merge(reg.df,up.beta)
+ }
+ if(add.quadratic.term == TRUE) {
+   quadratic.term <- reg.xts[,excess.market.returns.name]^2
+   reg.df = merge(reg.df,quadratic.term)
+   colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term"
+ }
+ reg.df = as.matrix(na.omit(reg.df))
 lars.fit = lars(reg.df[,factors.names],reg.df[,i],type=variable.selection,trace=FALSE)
 sum.lars <- summary(lars.fit)
- if (lars.criteria == "Cp") {
+ if (lars.criteria == "cp") {
 s<- which.min(sum.lars$Cp)
 } else {
 lars.cv <- cv.lars(reg.df[,factors.names],reg.df[,i],trace=FALSE,

Modified: pkg/FactorAnalytics/man/fitTimeseriesFactorModel.Rd
===================================================================
--- pkg/FactorAnalytics/man/fitTimeseriesFactorModel.Rd	2013-08-01 23:08:53 UTC (rev 2693)
+++ pkg/FactorAnalytics/man/fitTimeseriesFactorModel.Rd	2013-08-01 23:13:25 UTC (rev 2694)
@@ -8,7 +8,8 @@
     variable.selection = "none", decay.factor = 0.95,
     nvmax = 8, force.in = NULL,
     subsets.method = c("exhaustive", "backward", "forward", "seqrep"),
-    lars.criteria = "Cp")
+    lars.criteria = "cp", add.up.market.returns = FALSE,
+    add.quadratic.term = FALSE, excess.market.returns.name)
 }
 \arguments{
   \item{assets.names}{names of assets returns.}
 
   \item{factors.names}{names of factors returns.}
 
   \item{num.factor.subset}{scalar. Number of factors
   selected by all subsets.}
 
   \item{data}{a vector, matrix, data.frame, xts, timeSeries
-  or zoo object with asset returns and factors retunrs
-  rownames}
+  or zoo object with \code{assets.names} and
+  \code{factors.names} or \code{excess.market.returns.name}
+  if necessary.}
 
   \item{fit.method}{"OLS" is ordinary least squares method,
   "DLS" is discounted least squares method. Discounted
@@ -55,18 +57,33 @@
   exhaustive search, forward selection, backward selection
   or sequential replacement to search.}
 
-  \item{lars.criteria}{either choose minimum "Cp": unbiased
+  \item{lars.criteria}{either choose minimum "cp": unbiased
   estimator of the true risk, or "cv" 10 folds
   cross-validation. Default is "cp". See detail.}
+
+  \item{add.up.market.returns}{Logical. If \code{TRUE},
+  max(0,Rm-Rf) will be added as a regressor. Default is
+  \code{FALSE}. \code{excess.market.returns.name} is
+  required if \code{TRUE}. See Detail.}
+
+  \item{add.quadratic.term}{Logical. If \code{TRUE},
+  (Rm-Rf)^2 will be added as a regressor.
+  \code{excess.market.returns.name} is required if
+  \code{TRUE}. Default is \code{FALSE}.}
+
+  \item{excess.market.returns.name}{column name of the
+  market returns in excess of the risk free rate (Rm-Rf).}
 }
 \value{
-  an S3 object containing \itemize{ \item{asset.fit}{Fit
-  objects for each asset. This is the class "lm" for each
-  object.} \item{alpha}{N x 1 Vector of estimated alphas.}
-  \item{beta}{N x K Matrix of estimated betas.} \item{r2}{N
-  x 1 Vector of R-square values.} \item{resid.variance}{N x
-  1 Vector of residual variances.} \item{call}{function
-  call.} }
+  an S3 object containing \itemize{ \item{asset.fit} {Fit
+  objects for each asset. This is the class "lm" for each
+  object.} \item{alpha} {N x 1 Vector of estimated alphas.}
+  \item{beta} {N x K Matrix of estimated betas.} \item{r2}
+  {N x 1 Vector of R-square values.} \item{resid.variance}
+  {N x 1 Vector of residual variances.} \item{call}
+  {function call.} }
 }
 \description{
   Fit time series factor model by time series regression
   techniques. It creates the class of
   "TimeSeriesFactorModel".
 }
 \details{
+  add.up.market.returns adds a max(0,Rm-Rf) term in the
+  regression, as suggested by the Henriksson-Merton (1981)
+  model, to measure market timing. The coefficient can be
+  interpreted as the number of free put options.
+
   If \code{Robust} is chosen, there is no subsets but all
   factors will be used. Cp is defined in
   http://www-stat.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf.
@@ -100,11 +122,13 @@
   Eric Zivot and Yi-An Chen.
 }
 \references{
-  1. Efron, Hastie, Johnstone and Tibshirani (2002) "Least
-  Angle Regression" (with discussion) Annals of Statistics;
-  see also
+  \enumerate{ \item Efron, Hastie, Johnstone and Tibshirani
+  (2002) "Least Angle Regression" (with discussion) Annals
+  of Statistics; see also
   http://www-stat.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf.
-  2. Hastie, Tibshirani and Friedman (2008) Elements of
-  Statistical Learning 2nd edition, Springer, NY.
+  \item Hastie, Tibshirani and Friedman (2008) Elements of
+  Statistical Learning 2nd edition, Springer, NY. \item
+  Christopherson, Carino and Ferson (2009). Portfolio
+  Performance Measurement and Benchmarking, McGraw Hill. }
 }
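For readers of the archive, a hedged usage sketch of the two new arguments, assuming the managers.df data set documented elsewhere in the package and its "EDHEC.LS.EQ"/"SP500.TR" column names. Treating "SP500.TR" as the excess market return is an illustrative shortcut (it holds total returns, so in practice one would first subtract a risk-free rate), and the $beta accessor follows the \value section above:

data(managers.df)
fit.timing <- fitTimeSeriesFactorModel(
    assets.names  = colnames(managers.df[, 1:6]),
    factors.names = "EDHEC.LS.EQ",
    data = managers.df, fit.method = "OLS",
    # adds the max(0, Rm - Rf) regressor of the Henriksson-Merton timing test
    add.up.market.returns = TRUE,
    # adds the (Rm - Rf)^2 regressor, a Treynor-Mazuy style timing term
    add.quadratic.term = TRUE,
    excess.market.returns.name = "SP500.TR")
fit.timing$beta[, c("up.beta", "quadratic.term")]  # timing coefficients, one row per asset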
From noreply at r-forge.r-project.org  Fri Aug  2 02:09:21 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Fri, 2 Aug 2013 02:09:21 +0200 (CEST)
Subject: [Returnanalytics-commits] r2695 - pkg/FactorAnalytics/R
Message-ID: <20130802000922.0A59C183EBB@r-forge.r-project.org>

Author: chenyian
Date: 2013-08-02 02:09:21 +0200 (Fri, 02 Aug 2013)
New Revision: 2695

Removed:
   pkg/FactorAnalytics/R/bootstrapFactorESdecomposition.r
   pkg/FactorAnalytics/R/bootstrapFactorVaRdecomposition.r
   pkg/FactorAnalytics/R/chart.RollingStyle.R
   pkg/FactorAnalytics/R/chart.Style.R
Modified:
   pkg/FactorAnalytics/R/
Log:
ignore functions that haven't been reviewed yet.

Property changes on: pkg/FactorAnalytics/R
___________________________________________________________________
Modified: svn:ignore
   - covEWMA.R plot.MacroFactorModel.r print.MacroFactorModel.r summary.MacroFactorModel.r
   + bootstrapFactorESdecomposition.r bootstrapFactorVaRdecomposition.r chart.RollingStyle.R chart.Style.R covEWMA.R plot.MacroFactorModel.r print.MacroFactorModel.r summary.MacroFactorModel.r

Deleted: pkg/FactorAnalytics/R/bootstrapFactorESdecomposition.r
===================================================================
--- pkg/FactorAnalytics/R/bootstrapFactorESdecomposition.r	2013-08-01 23:13:25 UTC (rev 2694)
+++ pkg/FactorAnalytics/R/bootstrapFactorESdecomposition.r	2013-08-02 00:09:21 UTC (rev 2695)
@@ -1,83 +0,0 @@
-bootstrapFactorESdecomposition <- function(bootData, beta.vec, sig2.e, tail.prob = 0.01,
-                                           method=c("average"),
-                                           VaR.method=c("HS", "CornishFisher")) {
-## Compute factor model ES decomposition based on Euler's theorem given bootstrap data
-## and factor model parameters.
-## The partial derivative of ES wrt factor beta is computed -## as the expected factor return given fund return is less than or equal to portfolio VaR -## VaR is compute either as the sample quantile or as an estimated quantile -## using the Cornish-Fisher expansion -## inputs: -## bootData B x (k+2) matrix of bootstrap data. First column contains the fund returns, -## second through k+1 columns contain factor returns, k+2 column contain residuals -## scaled to have variance 1. -## beta.vec k x 1 vector of factor betas -## sig2.e scalar, residual variance from factor model -## tail.prob scalar tail probability -## method character, method for computing marginal ES. Valid choices are -## "average" for approximating E[Fj | R<=VaR] -## VaR.method character, method for computing VaR. Valid choices are "HS" for -## historical simulation (empirical quantile); "CornishFisher" for -## modified VaR based on Cornish-Fisher quantile estimate. Cornish-Fisher -## computation is done with the VaR.CornishFisher in the PerformanceAnalytics -## package -## output: -## Output: -## A list with the following components: -## ES.fm scalar, bootstrap ES value for fund reported as a positive number -## mcES.fm k+1 x 1 vector of factor marginal contributions to ES -## cES.fm k+1 x 1 vector of factor component contributions to ES -## pcES.fm k+1 x 1 vector of factor percent contributions to ES -## Remarks: -## The factor model has the form -## R(t) = beta'F(t) + e(t) = beta.star'F.star(t) -## where beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))' -## By Euler's theorem -## ES.fm = sum(cES.fm) = sum(beta.star*mcES.fm) -## References: -## 1. Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A General Analysis", -## The Journal of Risk 5/2. -## 2. Yamai and Yoshiba (2002). "Comparative Analyses of Expected Shortfall and -## Value-at-Risk: Their Estimation Error, Decomposition, and Optimization -## Bank of Japan. -## 3. Meucci (2007). "Risk Contributions from Generic User-Defined Factors," Risk. 
- require(PerformanceAnalytics) - VaR.method = VaR.method[1] - bootData = as.matrix(bootData) - ncol.bootData = ncol(bootData) - beta.names = c(names(beta.vec), "residual") - #beta.vec = as.vector(beta.vec) - beta.star.vec = c(beta.vec, sqrt(sig2.e)) - names(beta.star.vec) = beta.names - - if (VaR.method == "HS") { - VaR.fm = quantile(bootData[, 1], prob=tail.prob) - idx = which(bootData[, 1] <= VaR.fm) - ES.fm = -mean(bootData[idx, 1]) - } else { - VaR.fm = -VaR.CornishFisher(bootData[, 1], p=(1-tail.prob)) - idx = which(bootData[, 1] <= pVaR) - ES.fm = -mean(bootData[idx, 1]) - } - ## - ## compute marginal contribution to ES - ## - if (method == "average") { - ## compute marginal ES as expected value of factor return given fund - ## return is less than or equal to VaR - mcES.fm = -as.matrix(colMeans(bootData[idx, -1])) - } else { - stop("invalid method") - } - -## compute correction factor so that sum of weighted marginal ES adds to portfolio ES -#cf = as.numeric( ES.fm / sum(mcES.fm*beta.star.vec) ) -#mcES.fm = cf*mcES.fm -cES.fm = mcES.fm*beta.star.vec -pcES.fm = cES.fm/ES.fm -ans = list(ES.fm = ES.fm, - mcES.fm = mcES.fm, - cES.fm = cES.fm, - pcES.fm = pcES.fm) -return(ans) -} Deleted: pkg/FactorAnalytics/R/bootstrapFactorVaRdecomposition.r =================================================================== --- pkg/FactorAnalytics/R/bootstrapFactorVaRdecomposition.r 2013-08-01 23:13:25 UTC (rev 2694) +++ pkg/FactorAnalytics/R/bootstrapFactorVaRdecomposition.r 2013-08-02 00:09:21 UTC (rev 2695) @@ -1,90 +0,0 @@ -bootstrapFactorVaRdecomposition <- function(bootData, beta.vec, sig2.e, h=NULL, tail.prob = 0.01, - method=c("average"), - VaR.method=c("HS", "CornishFisher")) { -## Compute factor model VaR decomposition based on Euler's theorem given bootstrap data -## and factor model parameters. -## The partial derivative of VaR wrt factor beta is computed -## as the expected factor return given fund return is equal to portfolio VaR -## VaR is compute either as the sample quantile or as an estimated quantile -## using the Cornish-Fisher expansion -## inputs: -## bootData B x (k+2) matrix of bootstrap data. First column contains the fund returns, -## second through k+1 columns contain factor returns, k+2 column contain residuals -## scaled to have variance 1. -## beta.vec k x 1 vector of factor betas -## sig2.e scalar, residual variance from factor model -## h integer, number of obvs on each side of VaR. Default is h=round(sqrt(B)/2) -## tail.prob scalar tail probability -## method character, method for computing marginal VaR. Valid choices are -## "average" for approximating E[Fj | R=VaR] -## VaR.method character, method for computing VaR. Valid choices are "HS" for -## historical simulation (empirical quantile); "CornishFisher" for -## modified VaR based on Cornish-Fisher quantile estimate. 
Cornish-Fisher -## computation is done with the VaR.CornishFisher in the PerformanceAnalytics -## package -## output: -## Output: -## A list with the following components: -## VaR.fm scalar, bootstrap VaR value for fund reported as a positive number -## mcVaR.fm k+1 x 1 vector of factor marginal contributions to VaR -## cVaR.fm k+1 x 1 vector of factor component contributions to VaR -## pcVaR.fm k+1 x 1 vector of factor percent contributions to VaR -## Remarks: -## The factor model has the form -## R(t) = beta'F(t) + e(t) = beta.star'F.star(t) -## where beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))' -## By Euler's theorem -## VaR.fm = sum(cVaR.fm) = sum(beta.star*mcVaR.fm) -## References: -## 1. Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A General Analysis", -## The Journal of Risk 5/2. -## 2. Yamai and Yoshiba (2002). "Comparative Analyses of Expected Shortfall and -## Value-at-Risk: Their Estimation Error, Decomposition, and Optimization -## Bank of Japan. -## 3. Meucci (2007). "Risk Contributions from Generic User-Defined Factors," Risk. - require(PerformanceAnalytics) - VaR.method = VaR.method[1] - bootData = as.matrix(bootData) - ncol.bootData = ncol(bootData) - beta.names = c(names(beta.vec), "residual") - #beta.vec = as.vector(beta.vec) - beta.star.vec = c(beta.vec, sqrt(sig2.e)) - names(beta.star.vec) = beta.names - - # determine number of obvs to average around VaR - if (is.null(h)) { - h = round(sqrt(nrow(bootData))) - } else h = round(h) - - if (VaR.method == "HS") { - VaR.fm = -quantile(bootData[,1], prob=tail.prob) - } else { - VaR.fm = VaR.CornishFisher(bootData[,1], p=(1-tail.prob)) - } - ## - ## compute marginal contribution to VaR - ## - if (method == "average") { - ## compute marginal VaR as expected value of fund return given portfolio - ## return is equal to portfolio VaR - r.sort = sort(bootData[,1]) - idx.lower = which(r.sort <= -VaR.fm) - idx.upper = which(r.sort > -VaR.fm) - r.vals = c(r.sort[tail(idx.lower,n=h)], r.sort[head(idx.upper,n=h)]) - idx = which(bootData[,1] %in% r.vals) - mcVaR.fm = -as.matrix(colMeans(bootData[idx,-1])) - } else { - stop("invalid method") - } - -## compute correction factor so that sum of weighted marginal VaR adds to portfolio VaR -cf = as.numeric( VaR.fm / sum(mcVaR.fm*beta.star.vec) ) -mcVaR.fm = cf*mcVaR.fm -cVaR.fm = mcVaR.fm*beta.star.vec -pcVaR.fm = cVaR.fm/VaR.fm -ans = list(VaR.fm = VaR.fm, - mcVaR.fm = mcVaR.fm, - cVaR.fm = cVaR.fm, - pcVaR.fm = pcVaR.fm) -return(ans) -} Deleted: pkg/FactorAnalytics/R/chart.RollingStyle.R =================================================================== --- pkg/FactorAnalytics/R/chart.RollingStyle.R 2013-08-01 23:13:25 UTC (rev 2694) +++ pkg/FactorAnalytics/R/chart.RollingStyle.R 2013-08-02 00:09:21 UTC (rev 2695) @@ -1,52 +0,0 @@ -chart.RollingStyle <- -function (R.fund, R.style, method = c("constrained","unconstrained","normalized"), leverage = FALSE, width = 12, main = NULL, space = 0, ...) 
-{ # @author Peter Carl - - result<-table.RollingStyle(R.fund=R.fund, R.style=R.style, method=method,leverage=leverage,width=width) - - if (is.null(main)){ - freq = periodicity(R.fund) - - switch(freq$scale, - minute = {freq.lab = "minute"}, - hourly = {freq.lab = "hour"}, - daily = {freq.lab = "day"}, - weekly = {freq.lab = "week"}, - monthly = {freq.lab = "month"}, - quarterly = {freq.lab = "quarter"}, - yearly = {freq.lab = "year"} - ) - - main = paste(colnames(R.fund)[1]," Rolling ", width ,"-",freq.lab," Style Weights", sep="") - } - - chart.StackedBar(result, main = main, space = space, ...) - -} - -############################################################################### -# R (http://r-project.org/) Econometrics for Performance and Risk Analysis -# -# Copyright (c) 2004-2007 Peter Carl and Brian G. Peterson -# -# This library is distributed under the terms of the GNU Public License (GPL) -# for full details see the file COPYING -# -# $Id$ -# -############################################################################### -# $Log: not supported by cvs2svn $ -# Revision 1.4 2009-10-15 21:50:19 brian -# - updates to add automatic periodicity to chart labels, and support different frequency data -# -# Revision 1.3 2008-07-11 03:22:01 peter -# - removed unnecessary function attributes -# -# Revision 1.2 2008-04-18 03:59:52 peter -# - added na.omit to avoid problems with missing data -# -# Revision 1.1 2008/02/23 05:55:21 peter -# - chart demonstrating fund exposures through time -# -# -############################################################################### Deleted: pkg/FactorAnalytics/R/chart.Style.R =================================================================== --- pkg/FactorAnalytics/R/chart.Style.R 2013-08-01 23:13:25 UTC (rev 2694) +++ pkg/FactorAnalytics/R/chart.Style.R 2013-08-02 00:09:21 UTC (rev 2695) @@ -1,195 +0,0 @@ -#' calculate and display effective style weights -#' -#' Functions that calculate effective style weights and display the results in -#' a bar chart. \code{chart.Style} calculates and displays style weights -#' calculated over a single period. \code{chart.RollingStyle} calculates and -#' displays those weights in rolling windows through time. \code{style.fit} -#' manages the calculation of the weights by method. \code{style.QPfit} -#' calculates the specific constraint case that requires quadratic programming. -#' -#' These functions calculate style weights using an asset class style model as -#' described in detail in Sharpe (1992). The use of quadratic programming to -#' determine a fund's exposures to the changes in returns of major asset -#' classes is usually refered to as "style analysis". -#' -#' The "unconstrained" method implements a simple factor model for style -#' analysis, as in: \deqn{Ri = bi1*F1+bi2*F2+...+bin*Fn+ei}{R_i = -#' b_{i1}F_1+b_{i2}F_2+\dots+b_{in}F_n +e_i} where \eqn{Ri}{R_i} represents the -#' return on asset i, \eqn{Fj}{F_j} represents each factor, and \eqn{ei}{e_i} -#' represents the "non-factor" component of the return on i. This is simply a -#' multiple regression analysis with fund returns as the dependent variable and -#' asset class returns as the independent variables. The resulting slope -#' coefficients are then interpreted as the fund's historic exposures to asset -#' class returns. In this case, coefficients do not sum to 1. -#' -#' The "normalized" method reports the results of a multiple regression -#' analysis similar to the first, but with one constraint: the coefficients are -#' required to add to 1. 
Coefficients may be negative, indicating short -#' exposures. To enforce the constraint, coefficients are normalized. -#' -#' The "constrained" method includes the constraint that the coefficients sum -#' to 1, but adds that the coefficients must lie between 0 and 1. These -#' inequality constraints require a quadratic programming algorithm using -#' \code{\link[quadprog]{solve.QP}} from the 'quadprog' package, and the -#' implementation is discussed under \code{\link{style.QPfit}}. If set to -#' TRUE, "leverage" allows the sum of the coefficients to exceed 1. -#' -#' According to Sharpe (1992), the calculation for the constrained case is -#' represented as: \deqn{min var(Rf - sum[wi * R.si]) = min var(F - w*S)}{min -#' \sigma(R_f - \sum{w_i * R_s_i}) = min \sigma(F - w*S)} \deqn{s.t. sum[wi] = -#' 1; wi > 0}{ s.t. \sum{w_i} = 1; w_i > 0} -#' -#' Remembering that: -#' -#' \deqn{\sigma(aX + bY) = a^2 \sigma(X) + b^2 \sigma(Y) + 2ab cov(X,Y) = -#' \sigma(R.f) + w'*V*w - 2*w'*cov(R.f,R.s)} -#' -#' we can drop \eqn{var(Rf)}{\sigma(R_f)} as it isn't a function of weights, -#' multiply both sides by 1/2: -#' -#' \deqn{= min (1/2) w'*V*w - C'w}{= min (1/2) w'*V*w - C'w} \deqn{ s.t. w'*e = -#' 1, w_i > 0}{ s.t. w'*e = 1, w_i > 0} -#' -#' Which allows us to use \code{\link[quadprog]{solve.QP}}, which is specified -#' as: \deqn{min(-d' b + 1/2 b' D b)}{min(-d' b + 1/2 b' D b)} and the -#' constraints \deqn{ A' b >= b.0 }{ A' b >= b_0 } -#' -#' so: b is the weight vector, D is the variance-covariance matrix of the -#' styles d is the covariance vector between the fund and the styles -#' -#' The chart functions then provide a graphical summary of the results. The -#' underlying function, \code{\link{style.fit}}, provides the outputs of the -#' analysis and more information about fit, including an R-squared value. -#' -#' Styles identified in this analysis may be interpreted as an average of -#' potentially changing exposures over the period covered. The function -#' \code{\link{chart.RollingStyle}} may be useful for examining the behavior of -#' a manager's average exposures to asset classes over time, using a -#' rolling-window analysis. -#' -#' The chart functions plot a column chart or stacked column chart of the -#' resulting style weights to the current device. Both \code{style.fit} and -#' \code{style.QPfit} produce a list of data frames containing 'weights' and -#' 'R.squared' results. If 'model' = TRUE in \code{style.QPfit}, the full -#' result set is shown from the output of \code{solve.QP}. -#' -#' @aliases chart.Style chart.RollingStyle table.RollingStyle style.fit -#' style.QPfit -#' @param R.fund matrix, data frame, or zoo object with fund returns to be -#' analyzed -#' @param R.style matrix, data frame, or zoo object with style index returns. -#' Data object must be of the same length and time-aligned with R.fund -#' @param method specify the method of calculation of style weights as -#' "constrained", "unconstrained", or "normalized". For more information, see -#' \code{\link{style.fit}} -#' @param leverage logical, defaults to 'FALSE'. If 'TRUE', the calculation of -#' weights assumes that leverage may be used. For more information, see -#' \code{\link{style.fit}} -#' @param model logical. If 'model' = TRUE in \code{\link{style.QPfit}}, the -#' full result set is shown from the output of \code{solve.QP}. -#' @param selection either "none" (default) or "AIC". If "AIC", then the -#' function uses a stepwise regression to identify find the model with minimum -#' AIC value. 
See \code{\link{step}} for more detail. -#' @param unstacked logical. If set to 'TRUE' \emph{and} only one row of data -#' is submitted in 'w', then the chart creates a normal column chart. If more -#' than one row is submitted, then this is ignored. See examples below. -#' @param space the amount of space (as a fraction of the average bar width) -#' left before each bar, as in \code{\link{barplot}}. Default for -#' \code{chart.RollingStyle} is 0; for \code{chart.Style} the default is 0.2. -#' @param main set the chart title, same as in \code{\link{plot}} -#' @param width number of periods or window to apply rolling style analysis -#' over -#' @param ylim set the y-axis limit, same as in \code{\link{plot}} -#' @param \dots for the charting functions, these are arguments to be passed to -#' \code{\link{barplot}}. These can include further arguments (such as 'axes', -#' 'asp' and 'main') and graphical parameters (see 'par') which are passed to -#' 'plot.window()', 'title()' and 'axis'. For the calculation functions, these -#' are ignored. -#' @note None of the functions \code{chart.Style}, \code{style.fit}, and -#' \code{style.QPfit} make any attempt to align the two input data series. The -#' \code{chart.RollingStyle}, on the other hand, does merge the two series and -#' manages the calculation over common periods. -#' @author Peter Carl -#' @seealso \code{\link{barplot}}, \code{\link{par}} -#' @references Sharpe, W. Asset Allocation: Management Style and Performance -#' Measurement Journal of Portfolio Management, 1992, 7-19. See \url{ -#' http://www.stanford.edu/~wfsharpe/art/sa/sa.htm} -#' @keywords ts multivariate hplot -#' @examples -#' -#' data(edhec) -#' data(managers) -#' style.fit(managers[97:132,2,drop=FALSE],edhec[85:120,], method="constrained", leverage=FALSE) -#' chart.Style(managers[97:132,2,drop=FALSE],edhec[85:120,], method="constrained", leverage=FALSE, unstack=TRUE, las=3) -#' chart.RollingStyle(managers[,2,drop=FALSE],edhec[,1:11], method="constrained", leverage=FALSE, width=36, cex.legend = .7, colorset=rainbow12equal, las=1) -#' -`chart.Style` <- -function (R.fund, R.style, method = c("constrained", "unconstrained", "normalized"), leverage = FALSE, main = NULL, ylim = NULL, unstacked=TRUE, ...) -{ # @author Peter Carl - - # DESCRIPTION: - # A wrapper to create a chart of relative returns through time - - # R-Squared could deliver adjusted R-Squared if we wanted - - # FUNCTION: - - # Transform input data to a data frame - R.fund = checkData(R.fund) - R.style = checkData(R.style) - method = method[1] - - # Calculate - result = style.fit(R.fund, R.style, method = method, leverage = leverage) - weights = t(as.matrix(result$weights)) - - if(is.null(main)) - main = paste(colnames(R.fund)[1] ," Style Weights", sep="") - - if(is.null(ylim)) - if(method == "constrained" & leverage == FALSE) ylim = c(0,1) - else ylim = NULL - - chart.StackedBar(weights, main = main, ylim = ylim, unstacked = unstacked, ...) -# barplot(weights, main = main, ylim = ylim, ...) - -} - -############################################################################### -# R (http://r-project.org/) Econometrics for Performance and Risk Analysis -# -# Copyright (c) 2004-2007 Peter Carl and Brian G. 
Peterson -# -# This library is distributed under the terms of the GNU Public License (GPL) -# for full details see the file COPYING -# -# $Id$ -# -############################################################################### -# $Log: not supported by cvs2svn $ -# Revision 1.7 2008-07-11 03:24:52 peter -# - fixed error with alignment of results -# -# Revision 1.6 2008-04-18 03:58:04 peter -# - reduced to a wrapper to chart.StackedBar -# -# Revision 1.5 2008/02/27 04:05:32 peter -# - added 'leverage' tag to eliminate sum to one constraint -# - added cex.names for controlling size of xaxis labels -# -# Revision 1.4 2008/02/26 04:49:06 peter -# - handles single column fits better -# -# Revision 1.3 2008/02/26 04:39:40 peter -# - moved legend and margin control into chart.StackedBar -# - handles multiple columns -# -# Revision 1.2 2008/02/23 05:35:56 peter -# - set ylim more sensibly depending on method -# -# Revision 1.1 2008/02/23 05:32:37 peter -# - simple bar chart of a fund's exposures to a set of factors, as determined -# by style.fit -# -# -############################################################################### From noreply at r-forge.r-project.org Fri Aug 2 02:11:10 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 2 Aug 2013 02:11:10 +0200 (CEST) Subject: [Returnanalytics-commits] r2696 - pkg/FactorAnalytics/R Message-ID: <20130802001110.5EE23183EBB@r-forge.r-project.org> Author: chenyian Date: 2013-08-02 02:11:09 +0200 (Fri, 02 Aug 2013) New Revision: 2696 Removed: pkg/FactorAnalytics/R/dCornishFisher.R Modified: pkg/FactorAnalytics/R/ Log: Property changes on: pkg/FactorAnalytics/R ___________________________________________________________________ Modified: svn:ignore - bootstrapFactorESdecomposition.r bootstrapFactorVaRdecomposition.r chart.RollingStyle.R chart.Style.R covEWMA.R plot.MacroFactorModel.r print.MacroFactorModel.r summary.MacroFactorModel.r + bootstrapFactorESdecomposition.r bootstrapFactorVaRdecomposition.r chart.RollingStyle.R chart.Style.R covEWMA.R dCornishFisher.R plot.MacroFactorModel.r print.MacroFactorModel.r summary.MacroFactorModel.r Deleted: pkg/FactorAnalytics/R/dCornishFisher.R =================================================================== --- pkg/FactorAnalytics/R/dCornishFisher.R 2013-08-02 00:09:21 UTC (rev 2695) +++ pkg/FactorAnalytics/R/dCornishFisher.R 2013-08-02 00:11:09 UTC (rev 2696) @@ -1,8 +0,0 @@ -dCornishFisher <- -function(x, n,skew, ekurt) { - -density <- dnorm(x) + 1/sqrt(n)*(skew/6*(x^3-3*x))*dnorm(x) + - 1/n *( (skew)^2/72*(x^6 - 15*x^4 + 45*x^2 -15) + ekurt/24 *(x^4-6*x^2+3) )*dnorm(x) -return(density) -} - From noreply at r-forge.r-project.org Fri Aug 2 02:13:20 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 2 Aug 2013 02:13:20 +0200 (CEST) Subject: [Returnanalytics-commits] r2697 - pkg/FactorAnalytics/R Message-ID: <20130802001321.09192183EBB@r-forge.r-project.org> Author: chenyian Date: 2013-08-02 02:13:20 +0200 (Fri, 02 Aug 2013) New Revision: 2697 Removed: pkg/FactorAnalytics/R/FactorAnalytics-package.R pkg/FactorAnalytics/R/factorModelFactorRiskDecomposition.r pkg/FactorAnalytics/R/factorModelGroupRiskDecomposition.r pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r pkg/FactorAnalytics/R/factorModelPortfolioRiskDecomposition.r pkg/FactorAnalytics/R/factorModelRiskAttribution.r pkg/FactorAnalytics/R/factorModelRiskDecomposition.r pkg/FactorAnalytics/R/factorModelSimulation.r pkg/FactorAnalytics/R/impliedFactorReturns.R 
pkg/FactorAnalytics/R/modifiedEsReport.R pkg/FactorAnalytics/R/modifiedIncrementalES.R pkg/FactorAnalytics/R/modifiedIncrementalVaR.R pkg/FactorAnalytics/R/modifiedPortfolioEsDecomposition.R pkg/FactorAnalytics/R/modifiedPortfolioVaRDecomposition.R pkg/FactorAnalytics/R/modifiedVaRReport.R pkg/FactorAnalytics/R/nonparametricEsReport.R pkg/FactorAnalytics/R/nonparametricIncrementalES.R pkg/FactorAnalytics/R/nonparametricIncrementalVaR.R pkg/FactorAnalytics/R/nonparametricPortfolioEsDecomposition.R pkg/FactorAnalytics/R/nonparametricPortfolioVaRDecomposition.R pkg/FactorAnalytics/R/nonparametricVaRReport.R pkg/FactorAnalytics/R/normalEsReport.R pkg/FactorAnalytics/R/normalIncrementalES.R pkg/FactorAnalytics/R/normalIncrementalVaR.R pkg/FactorAnalytics/R/normalPortfolioEsDecomposition.R pkg/FactorAnalytics/R/normalPortfolioVaRDecomposition.R pkg/FactorAnalytics/R/normalVaRReport.R pkg/FactorAnalytics/R/pCornishFisher.R pkg/FactorAnalytics/R/qCornishFisher.R pkg/FactorAnalytics/R/rCornishFisher.R pkg/FactorAnalytics/R/scenarioPredictions.r pkg/FactorAnalytics/R/scenarioPredictionsPortfolio.r pkg/FactorAnalytics/R/style.QPfit.R pkg/FactorAnalytics/R/style.fit.R pkg/FactorAnalytics/R/table.RollingStyle.R Modified: pkg/FactorAnalytics/R/ Log: ignore functions that haven't reviewed yet. Property changes on: pkg/FactorAnalytics/R ___________________________________________________________________ Modified: svn:ignore - bootstrapFactorESdecomposition.r bootstrapFactorVaRdecomposition.r chart.RollingStyle.R chart.Style.R covEWMA.R dCornishFisher.R plot.MacroFactorModel.r print.MacroFactorModel.r summary.MacroFactorModel.r + FactorAnalytics-package.R bootstrapFactorESdecomposition.r bootstrapFactorVaRdecomposition.r chart.RollingStyle.R chart.Style.R covEWMA.R dCornishFisher.R factorModelFactorRiskDecomposition.r factorModelGroupRiskDecomposition.r factorModelPerformanceAttribution.r factorModelPortfolioRiskDecomposition.r factorModelRiskAttribution.r factorModelRiskDecomposition.r factorModelSimulation.r impliedFactorReturns.R modifiedEsReport.R modifiedIncrementalES.R modifiedIncrementalVaR.R modifiedPortfolioEsDecomposition.R modifiedPortfolioVaRDecomposition.R modifiedVaRReport.R nonparametricEsReport.R nonparametricIncrementalES.R nonparametricIncrementalVaR.R nonparametricPortfolioEsDecomposition.R nonparametricPortfolioVaRDecomposition.R nonparametricVaRReport.R normalEsReport.R normalIncrementalES.R normalIncrementalVaR.R normalPortfolioEsDecomposition.R normalPortfolioVaRDecomposition.R normalVaRReport.R pCornishFisher.R plot.MacroFactorModel.r print.MacroFactorModel.r qCornishFisher.R rCornishFisher.R scenarioPredictions.r scenarioPredictionsPortfolio.r style.QPfit.R style.fit.R summary.MacroFactorModel.r table.RollingStyle.R Deleted: pkg/FactorAnalytics/R/FactorAnalytics-package.R =================================================================== --- pkg/FactorAnalytics/R/FactorAnalytics-package.R 2013-08-02 00:11:09 UTC (rev 2696) +++ pkg/FactorAnalytics/R/FactorAnalytics-package.R 2013-08-02 00:13:20 UTC (rev 2697) @@ -1,132 +0,0 @@ - - -#' Functions for Cornish-Fisher density, CDF, random number simulation and -#' quantile. -#' -#' \code{dCornishFisher} Computes Cornish-Fisher density from two term -#' Edgeworth expansion given mean, standard deviation, skewness and excess -#' kurtosis. \code{pCornishFisher} Computes Cornish-Fisher CDF from two term -#' Edgeworth expansion given mean, standard deviation, skewness and excess -#' kurtosis. 
\code{qCornishFisher} Computes Cornish-Fisher quantiles from two -#' term Edgeworth expansion given mean, standard deviation, skewness and excess -#' kurtosis. \code{rCornishFisher} simulate observations based on -#' Cornish-Fisher quantile expansion given mean, standard deviation, skewness -#' and excess kurtosis. -#' -#' CDF(q) = Pr(sqrt(n)*(x_bar-mu)/sigma < q) -#' -#' @aliases rCornishFisher dCornishFisher pCornishFisher qCornishFisher -#' @param n scalar, number of simulated values in rCornishFisher. Sample length -#' in density,distribution,quantile function. -#' @param sigma scalar, standard deviation. -#' @param skew scalar, skewness. -#' @param ekurt scalar, excess kurtosis. -#' @param seed set seed here. Default is \code{NULL}. -#' @param x,q vector of standardized quantiles. See detail. -#' @param p vector of probabilities. -#' @return n simulated values from Cornish-Fisher distribution. -#' @author Eric Zivot and Yi-An Chen. -#' @references A.DasGupta, "Asymptotic Theory of Statistics and Probability", -#' Springer Science+Business Media,LLC 2008 Thomas A.Severini, "Likelihood -#' Methods in Statistics", Oxford University Press, 2000 -#' @examples -#' -#' # generate 1000 observation from Cornish-Fisher distribution -#' rc <- rCornishFisher(1000,1,0,5) -#' hist(rc,breaks=100,freq=FALSE,main="simulation of Cornish Fisher Distribution", -#' xlim=c(-10,10)) -#' lines(seq(-10,10,0.1),dnorm(seq(-10,10,0.1),mean=0,sd=1),col=2) -#' # compare with standard normal curve -#' -#' # example from A.dasGupta p.188 exponential example -#' # x is iid exp(1) distribution, sample size = 5 -#' # then x_bar is Gamma(shape=5,scale=1/5) distribution -#' q <- c(0,0.4,1,2) -#' # exact cdf -#' pgamma(q/sqrt(5)+1,shape=5,scale=1/5) -#' # use CLT -#' pnorm(q) -#' # use edgeworth expansion -#' pCornishFisher(q,n=5,skew=2,ekurt=6) -#' -#' @name CornishFisher -NULL - - - - - -#' Hypothetical Alternative Asset Manager and Benchmark Data -#' -#' a data.frame format from managers dataset from package PerformanceAnalytics, -#' containing columns of monthly returns for six hypothetical asset managers -#' (HAM1 through HAM6), the EDHEC Long-Short Equity hedge fund index, the S\&P -#' 500 total returns. Monthly returns for all series end in December 2006 and -#' begin at different periods starting from January 1997. -#' -#' -#' @name managers.df -#' @docType data -#' @keywords datasets -#' @examples -#' -#' data(managers.df) -#' ## maybe str(managers.df) ; plot(managers.df) ... -#' -NULL - - - - - -#' Monthly Stock Return Data || Portfolio of Weekly Stock Returns -#' -#' sfm.dat: This is a monthly "data.frame" object from January 1978 to December -#' 1987, with seventeen columns representing monthly returns of certain assets, -#' as in Chapter 2 of Berndt (1991). sfm.apca.dat: This is a weekly -#' "data.frame" object with dimension 182 x 1618, which runs from January 8, -#' 1997 to June 28, 2000 and represents the stock returns on 1618 U.S. stocks. -#' -#' CITCRP monthly returns of Citicorp. CONED monthly returns of Consolidated -#' Edison. CONTIL monthly returns of Continental Illinois. DATGEN monthly -#' returns of Data General. DEC monthly returns of Digital Equipment Company. -#' DELTA monthly returns of Delta Airlines. GENMIL monthly returns of General -#' Mills. GERBER monthly returns of Gerber. IBM monthly returns of -#' International Business Machines. MARKET a value-weighted composite monthly -#' returns based on transactions from the New York Stock Exchange and the -#' American Exchange. 
MOBIL monthly returns of Mobil. PANAM monthly returns
-#' of Pan American Airways. PSNH monthly returns of Public Service of New
-#' Hampshire. TANDY monthly returns of Tandy. TEXACO monthly returns of
-#' Texaco. WEYER monthly returns of Weyerhauser. RKFREE monthly returns on
-#' 30-day U.S. Treasury bills.
-#'
-#' @name stat.fm.data
-#' @aliases sfm.dat sfm.apca.dat
-#' @docType data
-#' @references Berndt, E. R. (1991). The Practice of Econometrics: Classic and
-#' Contemporary. Addison-Wesley Publishing Co.
-#' @source S+FinMetrics Berndt.dat & folio.dat
-#' @keywords datasets
-NULL
-
-
-
-
-
-#' constructed NYSE 447 assets from 1996-01-01 through 2003-12-31.
-#'
-#' constructed NYSE 447 assets from 1996-01-01 through 2003-12-31.
-#'
-#' Continuous data: PRICE, RETURN, VOLUME, SHARES.OUT, MARKET.EQUITY, LTDEBT,
-#' NET.SALES, COMMON.EQUITY, NET.INCOME, STOCKHOLDERS.EQUITY, LOG.MARKETCAP,
-#' LOG.PRICE, BOOK2MARKET. Categorical data: GICS, GICS.INDUSTRY, GICS.SECTOR.
-#' Identification data: DATE, PERMNO, TICKER.x
-#'
-#' @name stock
-#' @docType data
-#' @references Guy Yullen and Yi-An Chen
-#' @keywords datasets
-NULL
-
-
Deleted: pkg/FactorAnalytics/R/factorModelFactorRiskDecomposition.r
===================================================================
--- pkg/FactorAnalytics/R/factorModelFactorRiskDecomposition.r	2013-08-02 00:11:09 UTC (rev 2696)
+++ pkg/FactorAnalytics/R/factorModelFactorRiskDecomposition.r	2013-08-02 00:13:20 UTC (rev 2697)
@@ -1,53 +0,0 @@
-## factorModelFactorRiskDecomposition.r
-##
-## purpose: Compute factor model factor risk (sd) decomposition for individual
-##          fund
-## author: Eric Zivot
-## created: August 13, 2009
-## revision history:
-## July 1, 2010
-##    Added comment to inputs
-## June 8, 2010
-##    Added percent contribution to risk as output
-
-factorModelFactorRiskDecomposition <- function(beta.vec, factor.cov, sig2.e) {
-  ## Inputs:
-  ## beta        k x 1 vector of factor betas with factor names in the rownames
-  ## factor.cov  k x k factor excess return covariance matrix
-  ## sig2.e      scalar, residual variance from factor model
-  ## Output:
-  ## A list with the following components:
-  ## sd.fm    scalar, std dev based on factor model
-  ## mcr.fm   k+1 x 1 vector of factor marginal contributions to risk (sd)
-  ## cr.fm    k+1 x 1 vector of factor component contributions to risk (sd)
-  ## pcr.fm   k+1 x 1 vector of factor percent contributions to risk (sd)
-  ## Remarks:
-  ## The factor model has the form
-  ## R(t) = beta'F(t) + e(t) = beta.star'F.star(t)
-  ## where beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))'
-  ## By Euler's theorem
-  ## sd.fm = sum(cr.fm) = sum(beta*mcr.fm)
-  beta.names = c(rownames(beta.vec), "residual")
-  beta.vec = as.vector(beta.vec)
-  beta.star.vec = c(beta.vec, sqrt(sig2.e))
-  names(beta.star.vec) = beta.names
-  factor.cov = as.matrix(factor.cov)
-  k.star = length(beta.star.vec)
-  k = k.star - 1
-  factor.star.cov = diag(k.star)
-  factor.star.cov[1:k, 1:k] = factor.cov
-
-  ## compute factor model sd
-  sd.fm = as.numeric(sqrt(t(beta.star.vec) %*% factor.star.cov %*% beta.star.vec))
-  ## compute marginal and component contributions to sd
-  mcr.fm = (factor.star.cov %*% beta.star.vec)/sd.fm
-  cr.fm = mcr.fm * beta.star.vec
-  pcr.fm = cr.fm/sd.fm
-  rownames(mcr.fm) <- rownames(cr.fm) <- rownames(pcr.fm) <- beta.names
-  ## return results
-  ans = list(sd.fm = sd.fm,
-             mcr.fm = mcr.fm,
-             cr.fm = cr.fm,
-             pcr.fm = pcr.fm)
-  return(ans)
-}
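The Euler identity quoted in the remarks of the listing above, sd.fm = sum(cr.fm) = sum(beta*mcr.fm), is easy to sanity-check numerically. A minimal sketch with made-up two-factor inputs (the names and numbers below are invented for illustration; nothing comes from the package's data):

# hypothetical betas, factor covariance and residual variance
beta.star <- c(mkt = 1.1, smb = 0.4, residual = sqrt(0.02))
cov.star <- diag(3)
cov.star[1:2, 1:2] <- matrix(c(0.04, 0.01, 0.01, 0.09), 2, 2)
sd.fm <- as.numeric(sqrt(t(beta.star) %*% cov.star %*% beta.star))
mcr.fm <- (cov.star %*% beta.star) / sd.fm   # marginal contributions to sd
all.equal(sum(beta.star * mcr.fm), sd.fm)    # TRUE: contributions add up by Euler's theorem

Deleted: pkg/FactorAnalytics/R/factorModelGroupRiskDecomposition.r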
===================================================================
--- pkg/FactorAnalytics/R/factorModelGroupRiskDecomposition.r	2013-08-02 00:11:09 UTC (rev 2696)
+++ pkg/FactorAnalytics/R/factorModelGroupRiskDecomposition.r	2013-08-02 00:13:20 UTC (rev 2697)
@@ -1,78 +0,0 @@
-## factorModelGroupRiskDecomposition.r
-##
-## purpose: Compute factor model risk decomposition for individual fund by risk groups
-##          Risk groups are equity, rates, credit, fx, commodity, strategy
-##
-## author: Eric Zivot
-## created: July 9, 2009
-## revised: July 9, 2009
-
-factorModelGroupRiskDecomposition <- function(beta.vec, factor.cov, sig2.e,
-                                              equityIds, ratesIds, creditIds,
-                                              fxIds, cmdtyIds, strategyIds) {
-## Inputs:
-## beta         k x 1 vector of factor betas
-## factor.cov   k x k factor excess return covariance matrix
-## sig2.e       scalar, residual variance from factor model
-## equityIds    k1 x 1 vector of equity factor Ids
-## ratesIds     k2 x 1 vector of rates factor Ids
-## creditIds    k3 x 1 vector of credit factor Ids
-## fxIds        k4 x 1 vector of fx factor Ids
-## cmdtyIds     k5 x 1 vector of commodity factor Ids
-## strategyIds  k6 x 1 vector of strategy (blind) factor Ids
-##
-## Output:
-## A list with the following components:
-## var.fm          scalar, variance based on factor model
-## var.systematic  scalar, variance contribution due to factors
-## var.specific    scalar, residual variance contribution
-## var.cov         scalar, variance contribution due to covariances between factor groups
-## var.equity      scalar, variance contribution due to equity factors
-## var.rates       scalar, variance contribution due to rates factors
-## var.credit      scalar, variance contribution due to credit factors
-## var.fx          scalar, variance contribution due to fx factors
-## var.cmdty       scalar, variance contribution due to commodity factors
-## var.strategy    scalar, variance contribution due to strategy (pca) factors
-## Remarks:
-## k1 + ...
+ k6 = k -## var.fm = var.systematic + var.specific = sum(var.factors) + var.cov + var.specific - - beta.vec = as.matrix(beta.vec) - n.beta = length(beta.vec) - n.factors = length(c(equityIds, ratesIds, creditIds, fxIds, cmdtyIds, strategyIds)) - if (n.beta != n.factors) - stop("Number of supplied factor Ids is not equal to number of betas") - factor.cov = as.matrix(factor.cov) - -## compute factor model variance - var.systematic = t(beta.vec) %*% factor.cov %*% beta.vec - var.fm = var.systematic + sig2.e - -## compute factor model variance contributions - var.equity = t(beta.vec[equityIds,]) %*% factor.cov[equityIds,equityIds] %*% beta.vec[equityIds,] - var.rates = t(beta.vec[ratesIds,]) %*% factor.cov[ratesIds,ratesIds] %*% beta.vec[ratesIds,] - var.credit = t(beta.vec[creditIds,]) %*% factor.cov[creditIds,creditIds] %*% beta.vec[creditIds,] - var.fx = t(beta.vec[fxIds,]) %*% factor.cov[fxIds,fxIds] %*% beta.vec[fxIds,] - var.cmdty = t(beta.vec[cmdtyIds,]) %*% factor.cov[cmdtyIds,cmdtyIds] %*% beta.vec[cmdtyIds,] - if (!is.null(strategyIds)) { - var.strategy = t(beta.vec[strategyIds,]) %*% factor.cov[strategyIds,strategyIds] %*% beta.vec[strategyIds,] - } else { - var.strategy = 0 - } - -# compute covariance contribution - var.cov = var.systematic - (var.equity + var.rates + var.credit + var.fx + var.cmdty + var.strategy) - -## return results - ans = list(var.fm=as.numeric(var.fm), - var.systematic=as.numeric(var.systematic), - var.specific=sig2.e, - var.cov=as.numeric(var.cov), - var.equity=as.numeric(var.equity), - var.rates=as.numeric(var.rates), - var.credit=as.numeric(var.credit), - var.fx=as.numeric(var.fx), - var.cmdty=as.numeric(var.cmdty), - var.strategy=var.strategy) - return(ans) -} Deleted: pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r =================================================================== --- pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r 2013-08-02 00:11:09 UTC (rev 2696) +++ pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r 2013-08-02 00:13:20 UTC (rev 2697) @@ -1,292 +0,0 @@ -# performance attribution -# Yi-An Chen -# July 30, 2012 - - - -#' Compute BARRA-type performance attribution -#' -#' Decompose total returns or active returns into returns attributed to factors -#' and specific returns. Class of FM.attribution is generated and generic -#' function \code{plot} and \code{summary} can be used. -#' -#' total returns can be decomposed into returns attributed to factors and -#' specific returns. \eqn{R_t = \sum_j e_{jt} * f_{jt} + -#' u_t},t=1..T,\eqn{e_{jt}} is exposure to factor j and \eqn{f_{jt}} is factor -#' j. The returns attributed to factor j is \eqn{e_{jt} * f_{jt}} and portfolio -#' specific returns is \eqn{u_t} -#' -#' @param fit Class of "MacroFactorModel", "FundamentalFactorModel" or -#' "statFactorModel". -#' @param benchmark a zoo, vector or data.frame provides benchmark time series -#' returns. -#' @param ... Other controled variables for fit methods. -#' @return an object of class \code{FM.attribution} containing -#' @returnItem cum.ret.attr.f N X J matrix of cumulative return attributed to -#' factors. -#' @returnItem cum.spec.ret 1 x N vector of cumulative specific returns. -#' @returnItem attr.list list of time series of attributed returns for every -#' portfolio. -#' @author Yi-An Chen. -#' @references Grinold,R and Kahn R, \emph{Active Portfolio Management}, -#' McGraw-Hill. 
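As a side note on the decomposition described in the details above: the identity R_t = sum_j e_jt * f_jt + u_t is an exact split of each period's return into factor-attributed pieces and a specific return. A minimal one-period sketch with invented exposures and factor returns (nothing here is taken from managers.df):

# hypothetical exposures, factor returns and specific return for one period
e <- c(value = 0.8, momentum = 0.2)        # exposures e_jt
f <- c(value = 0.010, momentum = -0.005)   # factor returns f_jt
u <- 0.002                                 # specific return u_t
sum(e * f) + u                             # total return R_t = 0.007 + 0.002 = 0.009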
-#' @examples -#' -#' \dontrun{ -#' data(managers.df) -#' ret.assets = managers.df[,(1:6)] -#' factors = managers.df[,(7:9)] -#' fit.macro <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", factor.set = 3, -#' variable.selection="all subsets",decay.factor = 0.95) -#' # withoud benchmark -#' fm.attr <- factorModelPerformanceAttribution(fit.macro) -#' -#' } -#' -factorModelPerformanceAttribution <- - function(fit,benchmark=NULL,...) { - - # input - # fit : Class of MacroFactorModel, FundamentalFactorModel and statFactorModel - # benchmark: benchmark returns, default is NULL. If benchmark is provided, active returns - # is used. - # ... : controlled variables for fitMacroeconomicsFactorModel and fitStatisticalFactorModel - # output - # class of "FMattribution" - # - # plot.FMattribution - # summary.FMattribution - # print.FMattribution - require(zoo) - - if (class(fit) !="MacroFactorModel" & class(fit) !="FundamentalFactorModel" - & class(fit) != "StatFactorModel") - { - stop("Class has to be MacroFactorModel.") - } - - # MacroFactorModel chunk - - if (class(fit) == "MacroFactorModel") { - - # if benchmark is provided - - if (!is.null(benchmark)) { - ret.assets = fit$ret.assets - benchmark - fit = fitMacroeconomicFactorModel(ret.assets=ret.assets,...) - } -# return attributed to factors - cum.attr.ret <- fit$beta.mat - cum.spec.ret <- fit$alpha.vec - factorName = colnames(fit$beta.mat) - fundName = rownames(fit$beta.mat) - - attr.list <- list() - - for (k in fundName) { - fit.lm = fit$asset.fit[[k]] - - ## extract information from lm object - date <- names(fitted(fit.lm)) - - actual.z = zoo(fit.lm$model[1], as.Date(date)) - - -# attributed returns -# active portfolio management p.512 17A.9 - - cum.ret <- Return.cumulative(actual.z) - # setup initial value - attr.ret.z.all <- zoo(, as.Date(date)) - for ( i in factorName ) { - - if (fit$beta.mat[k,i]==0) { - cum.attr.ret[k,i] <- 0 - attr.ret.z.all <- merge(attr.ret.z.all,zoo(rep(0,length(date)),as.Date(date))) - } else { - attr.ret.z <- actual.z - zoo(as.matrix(fit.lm$model[i])%*%as.matrix(fit.lm$coef[i]), - as.Date(date)) - cum.attr.ret[k,i] <- cum.ret - Return.cumulative(actual.z-attr.ret.z) - attr.ret.z.all <- merge(attr.ret.z.all,attr.ret.z) - } - - } - - # specific returns - spec.ret.z <- actual.z - zoo(as.matrix(fit.lm$model[,-1])%*%as.matrix(fit.lm$coef[-1]), - as.Date(date)) - cum.spec.ret[k] <- cum.ret - Return.cumulative(actual.z-spec.ret.z) - attr.list[[k]] <- merge(attr.ret.z.all,spec.ret.z) - colnames(attr.list[[k]]) <- c(factorName,"specific.returns") - } - - - } - -if (class(fit) =="FundamentalFactorModel" ) { - # if benchmark is provided - - if (!is.null(benchmark)) { - stop("use fitFundamentalFactorModel instead") - } - # return attributed to factors - factor.returns <- fit$factor.rets[,-1] - factor.names <- names(fit$factor.rets[,-1]) - dates <- as.character(unique(fit$exposure.data[,"DATE"])) - exposure <- fund.fit$exposure.data - ticker <- names(fit$resids) - - N <- length(ticker) - J <- length(factor.names) - t <- length(dates) - # array arranges in N X J X t - # N is assets names, J is factors, t is time - attr.ret <- array(,dim=c(N,J,t),dimnames=list(ticker,factor.names,dates)) - for (i in dates) { - idx = which(exposure[,"DATE"]==i) - for (j in factor.names) { - attr.ret[,j,i] <- exposure[idx,j]*coredata(factor.returns[as.Date(i)])[,j] - } - } - - # specific returns - # zoo class - intercept <- fit$factor.rets[,1] - resids <- fit$resids - spec.ret.z <- resids + intercept - - #cumulative return attributed 
to factors - cum.attr.ret <- matrix(,nrow=length(ticker),ncol=length(factor.names), - dimnames=list(ticker,factor.names)) - cum.spec.ret <- rep(0,length(ticker)) - names(cum.spec.ret) <- ticker - - # arrange returns data - actual <- fund.fit$returns.data - # N <- length(assets.names) - # t <- length(dates) - # array arranges in N X t - # N is assets names, J is factors, t is time - actual.ret <- array(,dim=c(N,t),dimnames=list(ticker,dates)) - for (i in dates) { - idx = which(actual[,"DATE"]==i) - actual.ret[,i] <- actual[idx,"RETURN"] - } - - # make returns as zoo - actual.z.all <- zoo(,as.Date(dates)) - for (k in ticker) { - actual.z <- zoo(actual.ret[k,],as.Date(dates)) - actual.z.all <- merge(actual.z.all,actual.z) - } - colnames(actual.z.all) <- ticker - - - - # make list of every asstes and every list contains return attributed to factors - # and specific returns - attr.list <- list() - for (k in ticker){ - attr.ret.z.all <- zoo(,as.Date(dates)) - # cumulative returns - cum.ret <- Return.cumulative(actual.z.all[,k]) - for (j in factor.names) { - attr.ret.z <- zoo(attr.ret[k,j,],as.Date(dates) ) - attr.ret.z.all <- merge(attr.ret.z.all,attr.ret.z) - cum.attr.ret[k,j] <- cum.ret - Return.cumulative(actual.z.all[,k]-attr.ret.z) - } - attr.list[[k]] <- merge(attr.ret.z.all,spec.ret.z[,k]) - colnames(attr.list[[k]]) <- c(factor.names,"specific.returns") - cum.spec.ret[k] <- cum.ret - Return.cumulative(actual.z.all[,k]-spec.ret.z[,k]) - - } - -} - - if (class(fit) == "StatFactorModel") { - - # if benchmark is provided - - if (!is.null(benchmark)) { - x = fit$asset.ret - benchmark - fit = fitStatisticalFactorModel(x=x,...) - } - # return attributed to factors - cum.attr.ret <- t(fit$loadings) - cum.spec.ret <- fit$r2 - factorName = rownames(fit$loadings) - fundName = colnames(fit$loadings) - - # create list for attribution - attr.list <- list() - # pca method - - if ( dim(fit$asset.ret)[1] > dim(fit$asset.ret)[2] ) { - - - for (k in fundName) { - fit.lm = fit$asset.fit[[k]] - - ## extract information from lm object - date <- names(fitted(fit.lm)) - # probably needs more general Date setting - actual.z = zoo(fit.lm$model[1], as.Date(date)) - - - # attributed returns - # active portfolio management p.512 17A.9 - - cum.ret <- Return.cumulative(actual.z) - # setup initial value - attr.ret.z.all <- zoo(, as.Date(date)) - for ( i in factorName ) { - - attr.ret.z <- actual.z - zoo(as.matrix(fit.lm$model[i])%*%as.matrix(fit.lm$coef[i]), - as.Date(date)) - cum.attr.ret[k,i] <- cum.ret - Return.cumulative(actual.z-attr.ret.z) - attr.ret.z.all <- merge(attr.ret.z.all,attr.ret.z) - - - } - - # specific returns - spec.ret.z <- actual.z - zoo(as.matrix(fit.lm$model[,-1])%*%as.matrix(fit.lm$coef[-1]), - as.Date(date)) - cum.spec.ret[k] <- cum.ret - Return.cumulative(actual.z-spec.ret.z) - attr.list[[k]] <- merge(attr.ret.z.all,spec.ret.z) - colnames(attr.list[[k]]) <- c(factorName,"specific.returns") - } - } else { - # apca method -# fit$loadings # f X K -# fit$factors # T X f - - dates <- rownames(fit$factors) - for ( k in fundName) { - attr.ret.z.all <- zoo(, as.Date(date)) - actual.z <- zoo(fit$asset.ret[,k],as.Date(dates)) - cum.ret <- Return.cumulative(actual.z) - for (i in factorName) { - attr.ret.z <- zoo(fit$factors[,i] * fit$loadings[i,k], as.Date(dates) ) - attr.ret.z.all <- merge(attr.ret.z.all,attr.ret.z) - cum.attr.ret[k,i] <- cum.ret - Return.cumulative(actual.z-attr.ret.z) - } - spec.ret.z <- actual.z - zoo(fit$factors%*%fit$loadings[,k],as.Date(dates)) - cum.spec.ret[k] <- cum.ret - 
Return.cumulative(actual.z-spec.ret.z) - attr.list[[k]] <- merge(attr.ret.z.all,spec.ret.z) - colnames(attr.list[[k]]) <- c(factorName,"specific.returns") - } - - - } - - } - - - - ans = list(cum.ret.attr.f=cum.attr.ret, - cum.spec.ret=cum.spec.ret, - attr.list=attr.list) -class(ans) = "FM.attribution" -return(ans) - } Deleted: pkg/FactorAnalytics/R/factorModelPortfolioRiskDecomposition.r =================================================================== --- pkg/FactorAnalytics/R/factorModelPortfolioRiskDecomposition.r 2013-08-02 00:11:09 UTC (rev 2696) +++ pkg/FactorAnalytics/R/factorModelPortfolioRiskDecomposition.r 2013-08-02 00:13:20 UTC (rev 2697) @@ -1,74 +0,0 @@ -## factorModelPortfolioRiskDecomposition.r -## -## purpose: Compute factor model sd (risk) decomposition for portfolio -## author: Eric Zivot -## created: January 21, 2009 -## revised: January 28, 2009 -## references: -## Qian, Hua and Sorensen (2007) Quantitative Equity Portfolio Management, -## chapter 3. - -factorModelPortfolioRiskDecomposition <- function(w.vec, beta.mat, factor.cov, sig2.e) { -## Inputs: -## w.vec n x 1 vector of portfolio weights -## beta.mat n x k matrix of factor betas -## factor.cov k x k factor excess return covariance matrix -## sig2.e n x 1 vector of residual variances from factor model -## Output: -## cov.fm n x n excess return covariance matrxi based on -## estimated factor model -## Output: -## A list with the following components: -## var.p scalar, portfolio variance based on factor model -## var.p.systematic scalar, portfolio variance due to factors -## var.p.specific scalar, portfolio variance not explanied by factors -## var.p.cov scalar, portfolio variance due to covariance terms -## var.p.factors k x 1 vector, portfolio variances due to factors -## mcr.p n x 1 vector, marginal contributions to portfolio total risk -## mcr.p.systematic n x 1 vector, marginal contributions to portfolio systematic risk -## mcr.p.specific n x 1 vector, marginal contributions to portfolio specific risk -## pcr.p n x 1 vector, percent contribution to portfolio total risk - beta.mat = as.matrix(beta.mat) - factor.cov = as.matrix(factor.cov) - sig2.e = as.vector(sig2.e) - if (length(sig2.e) > 1) { - D.e = diag(as.vector(sig2.e)) - } else { - D.e = as.matrix(sig2.e) - } - if (ncol(beta.mat) != ncol(factor.cov)) - stop("beta.mat and factor.cov must have same number of columns") - if (nrow(D.e) != nrow(beta.mat)) - stop("beta.mat and D.e must have same number of rows") - ## compute factor model covariance matrix - cov.systematic = beta.mat %*% factor.cov %*% t(beta.mat) - cov.fm = cov.systematic + D.e - if (any(diag(chol(cov.fm)) == 0)) - warning("Covariance matrix is not positive definite") - ## compute portfolio level variance - var.p = as.numeric(t(w.vec) %*% cov.fm %*% w.vec) - var.p.systematic = as.numeric(t(w.vec) %*% cov.systematic %*% w.vec) - var.p.specific = as.numeric(t(w.vec) %*% D.e %*% w.vec) - beta.p = crossprod(w.vec, beta.mat) - var.p.factors = beta.p^2 * diag(factor.cov) - var.p.cov = var.p.systematic - sum(var.p.factors) - - - ## compute marginal contributions to risk - mcr.p = (cov.systematic %*% w.vec + D.e %*% w.vec)/sqrt(var.p) - mcr.p.systematic = (cov.systematic %*% w.vec + D.e %*% w.vec)/sqrt(var.p.systematic) - mcr.p.specific = (D.e %*% w.vec)/sqrt(var.p.specific) - ## compute percentage risk contribution - pcr.p = (w.vec * mcr.p)/sqrt(var.p) - ## return results - ans = list(var.p=var.p, - var.p.systematic=var.p.systematic, - var.p.specific=var.p.specific, - var.p.cov=var.p.cov, - 
var.p.factors=var.p.factors, - mcr.p=mcr.p, - mcr.p.systematic=mcr.p.systematic, - mcr.p.specific=mcr.p.specific, - pcr.p=pcr.p) - return(ans) -} \ No newline at end of file Deleted: pkg/FactorAnalytics/R/factorModelRiskAttribution.r =================================================================== --- pkg/FactorAnalytics/R/factorModelRiskAttribution.r 2013-08-02 00:11:09 UTC (rev 2696) +++ pkg/FactorAnalytics/R/factorModelRiskAttribution.r 2013-08-02 00:13:20 UTC (rev 2697) @@ -1,59 +0,0 @@ -# Yi-An Chen -# July 5, 2012 - -factorModelRiskAttribution <- - function(fit) { - class = class(fit,benchmark,start,end,Full.sample=TRUE) - # input - # class: Class has to be either MacroFactorModel, FundmentalFactorModel - # or StatFactorModel - # benchmark: benchmark returns, default is equally weighted portfolio - # start : Start of trailling period. - # end : End of trailling period. - # Full.sample: Is full sample included in the analysis. Default is TRUE. - - # output - # class of "FMattribution" - # - # plot.FMattribution - # summary.FMattribution - # print.FMattribution - if (class !="MacroFactorModel" && class !="FundmentalFactorModel" - && class != "StatFactorModel") - { - stop("Class has to be either MacroFactorModel, FundmentalFactorModel - or StatFactorModel") - } - # get portfolio names and factor names - manager.names = colnames(fit.macro$ret.assets) - factor.names = colnames(fit.macro$factors) - - # beginning of switching - switch(class, - MacroFactorModel={ - for (i in manager.names) { - - total.ret = fit$ret.assets - total.sd = sd(total) - active.ret = benchmark - fit$ret.assets - active.sd = ad(active) - expected.active.ret = beta.mat%*%fit.macro$factors - benchmark - exceptional.active.ret = active- expected.active.ret - Market.timing - Risk.indexes - Industries - Asset.selection - Trading - Transaction.cost - } - - }, - FundmentalFactorModel={ - print("test 2") - }, - StatFactorModel={ - - } - - ) - } \ No newline at end of file Deleted: pkg/FactorAnalytics/R/factorModelRiskDecomposition.r =================================================================== --- pkg/FactorAnalytics/R/factorModelRiskDecomposition.r 2013-08-02 00:11:09 UTC (rev 2696) +++ pkg/FactorAnalytics/R/factorModelRiskDecomposition.r 2013-08-02 00:13:20 UTC (rev 2697) @@ -1,44 +0,0 @@ -## factorModelRiskDecomposition.r -## -## purpose: Compute factor model risk decomposition for individual fund -## author: Eric Zivot -## created: January 21, 2009 -## revised: January 28, 2009 - -factorModelRiskDecomposition <- function(beta.vec, factor.cov, sig2.e) { -## Inputs: -## beta k x 1 vector of factor betas -## factor.cov k x k factor excess return covariance matrix -## sig2.e scalar, residual variance from factor model -## Output: -## cov.fm n x n excess return covariance matrix based on -## estimated factor model -## Output: -## A list with the following components: -## var.fm scalar, variance based on factor model -## var.systematic scalar, variance due to factors -# var.specific scalar, residual variance -## var.cov scalar, variance due to covariance contribution -## var.factors k x 1 vector of variances due to factors -## Remarks: -## var.fm = var.systematic + var.specific = sum(var.factors) + var.cov + var.specific - - beta.vec = as.vector(beta.vec) - factor.cov = as.matrix(factor.cov) - -## compute factor model variance - var.systematic = t(beta.vec) %*% factor.cov %*% beta.vec - var.fm = var.systematic + sig2.e - -## compute factor model variance contributions - var.factors = beta.vec^2 * diag(factor.cov) - 
var.cov = var.systematic - sum(var.factors) - -## return results - ans = list(var.fm=as.numeric(var.fm), - var.systematic=as.numeric(var.systematic), - var.specific=sig2.e, - var.cov=as.numeric(var.cov), - var.factors=var.factors) - return(ans) -} Deleted: pkg/FactorAnalytics/R/factorModelSimulation.r =================================================================== --- pkg/FactorAnalytics/R/factorModelSimulation.r 2013-08-02 00:11:09 UTC (rev 2696) +++ pkg/FactorAnalytics/R/factorModelSimulation.r 2013-08-02 00:13:20 UTC (rev 2697) @@ -1,74 +0,0 @@ - -factorModelSimulation <- function(n.sim=5000, factorBetas, factorData, residualMoments, - residual.dist = c("normal","Cornish-Fisher", "skew-t")) { - ## Simulate performance for specified funds from fitted factor models - ## Simulations are computed using a semi-parametric methodology. Factor performance - ## is simulated by bootstrapping with replacement from historical performance, and - ## a fitted factor model return conditional on factor performance is computed by - ## applying estimated factor model coefficients to the bootstrapped factor performance. - ## Residuals are simulated parametrically from one of three distributions: normal - ## distribution, Cornish-Fisher distribution, skew-t distribution. Unconditional - ## performance is then computed by adding the risidual simulations to the conditional - ## performance. - ## inputs: - ## n.sim scalar, number of simulations - ## factorBetas n.fund x k matrix of factor model betas for n.fund funds - ## factorData n x k matrix of historical factor performance - ## residualMoments n.fund x 7 vector of residual moments with columns sigma, skew, - ## ekurt, location, scale, shape, df - ## residual.dist character value indicating the residual distribution. Valid choices - ## are "normal" for normal distribution; "Cornish-Fisher" for - ## Cornish-Fisher distribution; "skew-t' for skew-t distribution. - ## output: - ## n.sim x n.fund matrix of simulated performance. - ## Remarks: - ## 1. The factor model has the form - ## R(t) = beta'F(t) + e(t), e(t) ~ D(0,theta) - ## where beta = fitted factor model parameters, F(t) = risk factors, e(t) = residuals, - ## and theta = parameters of residual distribution D. If D = normal, then theta = [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2697 From noreply at r-forge.r-project.org Fri Aug 2 02:57:10 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 2 Aug 2013 02:57:10 +0200 (CEST) Subject: [Returnanalytics-commits] r2698 - in pkg/FactorAnalytics: . R Message-ID: <20130802005710.7A14D18558E@r-forge.r-project.org> Author: chenyian Date: 2013-08-02 02:57:09 +0200 (Fri, 02 Aug 2013) New Revision: 2698 Modified: pkg/FactorAnalytics/DESCRIPTION pkg/FactorAnalytics/R/fitStatisticalFactorModel.R pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r Log: debug Modified: pkg/FactorAnalytics/DESCRIPTION =================================================================== --- pkg/FactorAnalytics/DESCRIPTION 2013-08-02 00:13:20 UTC (rev 2697) +++ pkg/FactorAnalytics/DESCRIPTION 2013-08-02 00:57:09 UTC (rev 2698) @@ -7,5 +7,5 @@ Maintainer: Yi-An Chen Description: An R package for estimation and risk analysis of linear factor models for asset returns and portfolios. 
It contains three major fitting method for the factor models: fitting macroeconomic factor model, fitting fundamental factor model and fitting statistical factor model and some risk analysis tools like VaR, ES to use the result of the fitting method. It also provides the different type of distribution to fit the fat-tail behavior of the financial returns, including edgeworth expansion type distribution. License: GPL-2 -Depends: robust, robustbase, leaps, lars, zoo, MASS, PerformanceAnalytics, ff, sn, tseries, strucchange +Depends: robust, robustbase, leaps, lars, zoo, MASS, PerformanceAnalytics, ff, sn, tseries, strucchange,xts,ellipse LazyLoad: yes \ No newline at end of file Modified: pkg/FactorAnalytics/R/fitStatisticalFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitStatisticalFactorModel.R 2013-08-02 00:13:20 UTC (rev 2697) +++ pkg/FactorAnalytics/R/fitStatisticalFactorModel.R 2013-08-02 00:57:09 UTC (rev 2698) @@ -209,7 +209,7 @@ if(is.null(ret.cov)) { ret.cov <- crossprod(xc)/m } - eigen.tmp <- eigen(ret.cov, symm = TRUE) + eigen.tmp <- eigen(ret.cov, symmetric = TRUE) # compute loadings beta B <- t(eigen.tmp$vectors[, 1:k, drop = FALSE]) # compute estimated factors @@ -288,7 +288,7 @@ if(refine) { xs <- t(xc)/sqrt(sigma) ret.cov <- crossprod(xs)/n - eig.tmp <- eigen(ret.cov, symm = TRUE) + eig.tmp <- eigen(ret.cov, symmetric = TRUE) f <- eig.tmp$vectors[, 1:k, drop = FALSE] f1 <- cbind(1, f) B <- backsolve(chol(crossprod(f1)), diag(k + 1)) Modified: pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R 2013-08-02 00:13:20 UTC (rev 2697) +++ pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R 2013-08-02 00:57:09 UTC (rev 2698) @@ -176,7 +176,7 @@ # sum weigth to unitary w <- w/sum(w) fm.formula = as.formula(paste(i,"~", ".", sep="")) - fm.fit = lm(fm.formula, data=reg.df,weight=w) + fm.fit = lm(fm.formula, data=reg.df,weights=w) fm.summary = summary(fm.fit) reg.list[[i]] = fm.fit Alphas[i] = coef(fm.fit)[1] @@ -301,7 +301,7 @@ # sum weigth to unitary w <- w/sum(w) fm.formula = as.formula(paste(i,"~", ".", sep="")) - fm.fit = lm(fm.formula, data=reg.df,weight=w) + fm.fit = lm(fm.formula, data=reg.df,weights=w) fm.summary = summary(fm.fit) reg.list[[i]] = fm.fit Alphas[i] = coef(fm.fit)[1] @@ -333,7 +333,7 @@ reg.df = merge(reg.df,quadratic.term) colnames(reg.df)[dim(reg.df)[2]] <- "quadratic.term" } - fm.fit = lm(fm.formula, data=reg.df,weight=w) + fm.fit = lm(fm.formula, data=reg.df,weights=w) fm.summary = summary(fm.fit) reg.list[[i]] = fm.fit Alphas[i] = coef(fm.fit)[1] @@ -425,7 +425,7 @@ # sum weigth to unitary w <- w/sum(w) fm.formula = as.formula(paste(i,"~", ".", sep="")) - fm.fit = step(lm(fm.formula, data=reg.df,weight=w),trace=0) + fm.fit = step(lm(fm.formula, data=reg.df,weights=w),trace=0) fm.summary = summary(fm.fit) reg.list[[i]] = fm.fit Alphas[i] = coef(fm.fit)[1] Modified: pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r 2013-08-02 00:13:20 UTC (rev 2697) +++ pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r 2013-08-02 00:57:09 UTC (rev 2698) @@ -197,7 +197,7 @@ } w <- w/sum(w) rollReg <- function(data.z, formula,w) { - coef(lm(formula,weight=w, data = as.data.frame(data.z))) + coef(lm(formula,weights=w, data = as.data.frame(data.z))) } reg.z = 
zoo(fit.lm$model[-length(fit.lm$model)], as.Date(rownames(fit.lm$model)))
factorNames = colnames(fit.lm$model)[c(-1,-length(fit.lm$model))]

From noreply at r-forge.r-project.org  Fri Aug 2 14:15:36 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Fri, 2 Aug 2013 14:15:36 +0200 (CEST)
Subject: [Returnanalytics-commits] r2699 - pkg/PerformanceAnalytics/sandbox/pulkit/week6
Message-ID: <20130802121536.B37A2184CDB@r-forge.r-project.org>

Author: pulkit
Date: 2013-08-02 14:15:36 +0200 (Fri, 02 Aug 2013)
New Revision: 2699

Modified:
   pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R
   pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R
Log:
CDaR

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R	2013-08-02 00:57:09 UTC (rev 2698)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R	2013-08-02 12:15:36 UTC (rev 2699)
@@ -22,15 +22,126 @@
#'
#'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of multiple sample path returns
#'@param ps the probability for each sample path
+#'@param sample the number of sample paths (scenarios) in the return series
+#'@param instr the number of instruments in the return series
+#'@param geometric utilize geometric chaining (TRUE) or simple/arithmetic
+#'chaining (FALSE) to aggregate returns, default TRUE
#'@param p confidence level for calculation, default p = 0.95
#'@param \dots any other passthru parameters
#'
#'@references
-#'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM) with Drawdown Measure.
-#'Research Report 2012-9, ISE Dept., University of Florida, September 2012
+#'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM)
+#' with Drawdown Measure. Research Report 2012-9, ISE Dept., University of Florida,
+#' September 2012
-
-CdarMultiPath<-function(){
-
-}
\ No newline at end of file
+ CdarMultiPath<-function (R,ps,sample,instr, geometric = TRUE,p = 0.95, ...)
+ { + #p = .setalphaprob(p) + if (is.vector(R) || ncol(R) == 1) { + R = na.omit(R) + nr = nrow(R) + # checking if nr*p is an integer + if((p*nr) %% 1 == 0){ + drawdowns = as.matrix(Drawdowns(R)) + drawdowns = drawdowns(order(drawdowns),decreasing = TRUE) + # average of the drawdowns greater the (1-alpha).100% largest drawdowns + result = (1/((1-p)*nr(R)))*sum(drawdowns[((1-p)*nr):nr]) + } + else{ # if nr*p is not an integer + #f.obj = c(rep(0,nr),rep((1/(1-p))*(1/nr),nr),1) + + + # The objective function is defined + + for(i in 1:sample){ + for(j in 1:nr){ + f.obj = c(f.obj,ps[i]*drawdowns[i,j]) + } + } + print(f.obj) + + # constraint 1: ps.qst = 1 + for(i in 1:sample){ + for(j in 1:nr){ + f.con = c(f.con,ps[i]) + } + } + f.con = matrix(f.con,nrow =1) + f.dir = "=" + f.rhs = 1 + # constraint 2 : qst >= 0 + for(i in 1:sample){ + for(j in 1:nr){ + r<-rep(0,sample*nr) + r[(i-1)*s+j] = 1 + f.con = rbind(f.con,r) + } + } + f.dir = c(f.dir,">=",sample*nr) + f.rhs = c(f.rhs,rep(0,sample*nr)) + + + # constraint 3 : qst =< 1/(1-alpha)*T + for(i in 1:sample){ + for(j in 1:nr){ + r<-rep(0,sample*nr) + r[(i-1)*s+j] = 1 + f.con = rbind(f.con,r) + } + } + f.dir = c(f.dir,"<=",sample*nr) + f.rhs = c(f.rhs,rep(1/(1-p)*nr,sample*nr)) + + # constraint 1: + # f.con = cbind(-diag(nr),diag(nr),1) + # f.dir = c(rep(">=",nr)) + # f.rhs = c(rep(0,nr)) + + #constatint 2: + # ut = diag(nr) + # ut[-1,-nr] = ut[-1,-nr] - diag(nr - 1) + # f.con = rbind(f.con,cbind(ut,matrix(0,nr,nr),1)) + # f.dir = c(rep(">=",nr)) + # f.rhs = c(f.rhs,-R) + + #constraint 3: + # f.con = rbind(f.con,cbind(matrix(0,nr,nr),diag(nr),1)) + # f.dir = c(rep(">=",nr)) + # f.rhs = c(f.rhs,rep(0,nr)) + + #constraint 4: + # f.con = rbind(f.con,cbind(diag(nr),matrix(0,nr,nr),1)) + # f.dir = c(rep(">=",nr)) + # f.rhs = c(f.rhs,rep(0,nr)) + + val = lp("max",f.obj,f.con,f.dir,f.rhs) + result = val$objval + } + if (invert) + result <- -result + + return(result) + } + else { + R = checkData(R, method = "matrix") + if (is.null(weights)) { + result = matrix(nrow = 1, ncol = ncol(R)) + for (i in 1:ncol(R)) { + result[i] <- CDD(R[, i, drop = FALSE], p = p, + geometric = geometric, invert = invert, ... = ...) + } + dim(result) = c(1, NCOL(R)) + colnames(result) = colnames(R) + rownames(result) = paste("Conditional Drawdown ", + p * 100, "%", sep = "") + } + else { + portret <- Return.portfolio(R, weights = weights, + geometric = geometric) + result <- CDD(portret, p = p, geometric = geometric, + invert = invert, ... = ...) + } + return(result) + } + } \ No newline at end of file Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R 2013-08-02 00:57:09 UTC (rev 2698) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R 2013-08-02 12:15:36 UTC (rev 2699) @@ -1,35 +1,40 @@ -CDD<-function (R, weights = NULL, geometric = TRUE, invert = TRUE, +CDaR<-function (R, weights = NULL, geometric = TRUE, invert = TRUE, p = 0.95, ...) 
{ - alpha = p #p = .setalphaprob(p) if (is.vector(R) || ncol(R) == 1) { R = na.omit(R) nr = nrow(R) # checking if nr*p is an integer if((p*nr) %% 1 == 0){ - drawdowns = as.matrix(Drawdowns(R)) - drawdowns = drawdowns(order(drawdowns),decreasing = TRUE) + drawdowns = Drawdowns(R) + drawdowns = drawdowns[order(drawdowns),decreasing = TRUE] # average of the drawdowns greater the (1-alpha).100% largest drawdowns - result = (1/((1-p)*nr(R)))*sum(drawdowns[((1-p)*nr):nr]) + result = (1/((1-p)*nr))*sum(drawdowns[(p*nr):nr]) } - else{ - f.obj = c(rep(0,nr),rep((1/(1-alpha))*(1/nr),nr),1) - + else{# if nr*p is not an integer + f.obj = c(rep(0,nr),rep((1/(1-p))*(1/nr),nr),1) + + # k varies from 1:nr + + # constraint : zk -uk +y >= 0 f.con = cbind(-diag(nr),diag(nr),1) f.dir = c(rep(">=",nr)) f.rhs = c(rep(0,nr)) + # constraint : uk -uk-1 >= -rk ut = diag(nr) ut[-1,-nr] = ut[-1,-nr] - diag(nr - 1) f.con = rbind(f.con,cbind(ut,matrix(0,nr,nr),1)) f.dir = c(rep(">=",nr)) f.rhs = c(f.rhs,-R) + # constraint : zk >= 0 f.con = rbind(f.con,cbind(matrix(0,nr,nr),diag(nr),1)) f.dir = c(rep(">=",nr)) f.rhs = c(f.rhs,rep(0,nr)) + # constraint : uk >= 0 f.con = rbind(f.con,cbind(diag(nr),matrix(0,nr,nr),1)) f.dir = c(rep(">=",nr)) f.rhs = c(f.rhs,rep(0,nr)) From noreply at r-forge.r-project.org Fri Aug 2 20:01:50 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 2 Aug 2013 20:01:50 +0200 (CEST) Subject: [Returnanalytics-commits] r2700 - in pkg/FactorAnalytics: . R man Message-ID: <20130802180151.0A3391851F6@r-forge.r-project.org> Author: chenyian Date: 2013-08-02 20:01:50 +0200 (Fri, 02 Aug 2013) New Revision: 2700 Modified: pkg/FactorAnalytics/NAMESPACE pkg/FactorAnalytics/R/factorModelMonteCarlo.R pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r pkg/FactorAnalytics/R/plot.StatFactorModel.r pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r pkg/FactorAnalytics/R/predict.StatFactorModel.r pkg/FactorAnalytics/R/predict.TimeSeriesFactorModel.r pkg/FactorAnalytics/R/print.FundamentalFactorModel.r pkg/FactorAnalytics/R/print.StatFactorModel.r pkg/FactorAnalytics/R/print.TimeSeriesFactorModel.r pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r pkg/FactorAnalytics/R/summary.StatFactorModel.r pkg/FactorAnalytics/R/summary.TimeSeriesFactorModel.r pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/plot.StatFactorModel.Rd pkg/FactorAnalytics/man/plot.TimeSeriesFactorModel.Rd pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/predict.StatFactorModel.Rd pkg/FactorAnalytics/man/predict.TimeSeriesFactorModel.Rd pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/print.StatFactorModel.Rd pkg/FactorAnalytics/man/print.TimeSeriesFactorModel.Rd pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/summary.StatFactorModel.Rd pkg/FactorAnalytics/man/summary.TimeSeriesFactorModel.Rd Log: debug generic related function .Rd files. 
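The NAMESPACE diff below registers each S3 method explicitly. As a reminder of the mechanics this revision relies on, a minimal sketch of how the roxygen tags used here map to a NAMESPACE directive (the print body is only a placeholder):

#' @method print TimeSeriesFactorModel
#' @export
print.TimeSeriesFactorModel <- function(x, ...) {
  # dispatched by print() on objects of class "TimeSeriesFactorModel"
  cat("TimeSeriesFactorModel fit\n")
  invisible(x)
}
# roxygen2 converts the two tags above into the NAMESPACE line:
# S3method(print,TimeSeriesFactorModel)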
Modified: pkg/FactorAnalytics/NAMESPACE
===================================================================
--- pkg/FactorAnalytics/NAMESPACE	2013-08-02 12:15:36 UTC (rev 2699)
+++ pkg/FactorAnalytics/NAMESPACE	2013-08-02 18:01:50 UTC (rev 2700)
@@ -0,0 +1,14 @@
+export(factorModelMonteCarlo)
+export(fitTimeSeriesFactorModel)
+S3method(plot,FundamentalFactorModel)
+S3method(plot,StatFactorModel)
+S3method(plot,TimeSeriesFactorModel)
+S3method(predict,FundamentalFactorModel)
+S3method(predict,StatFactorModel)
+S3method(print,FundamentalFactorModel)
+S3method(print,StatFactorModel)
+S3method(print,TimeSeriesFactorModel)
+S3method(summary,FundamentalFactorModel)
+S3method(summary,StatFactorModel)
+S3method(summary,TimeSeriesFactorModel)
+S3method(predict,TimeSeriesFactorModel)
Modified: pkg/FactorAnalytics/R/factorModelMonteCarlo.R
===================================================================
--- pkg/FactorAnalytics/R/factorModelMonteCarlo.R	2013-08-02 12:15:36 UTC (rev 2699)
+++ pkg/FactorAnalytics/R/factorModelMonteCarlo.R	2013-08-02 18:01:50 UTC (rev 2700)
@@ -44,6 +44,7 @@
#' residuals. Returned only if \code{return.residuals = TRUE}.
#' @author Eric Zivot and Yi-An Chen.
#' @references Jiang, Y. (2009). UW PhD Thesis.
+#' @export
#' @examples
#'
#' # load data from the database
Modified: pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R
===================================================================
--- pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R	2013-08-02 12:15:36 UTC (rev 2699)
+++ pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R	2013-08-02 18:01:50 UTC (rev 2700)
@@ -84,6 +84,7 @@
#' chart.TimeSeries(dataToPlot, main="FM fit for HAM1",
#' colorset=c("black","blue"), legend.loc="bottomleft")
#' }
+#' @export
fitTimeSeriesFactorModel <-
function(assets.names, factors.names, data=data, num.factor.subset = 1,
fit.method=c("OLS","DLS","Robust"),
Modified: pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r
===================================================================
--- pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r	2013-08-02 12:15:36 UTC (rev 2699)
+++ pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r	2013-08-02 18:01:50 UTC (rev 2700)
@@ -9,7 +9,7 @@
#' Generic function of plot method for fitFundamentalFactorModel.
#'
#'
-#' @param fit.fund fit object created by fitFundamentalFactorModel.
+#' @param x fit object created by fitFundamentalFactorModel.
#' @param which.plot integer indicating which plot to create: "none" will
#' create a menu to choose. Default is none.
#' 1 = "Factor returns",
@@ -55,9 +55,11 @@
#'
#' plot(fit.fund)
#' }
+#' @method plot FundamentalFactorModel
+#' @export
#'
plot.FundamentalFactorModel <-
-function(fit.fund,which.plot=c("none","1L","2L","3L","4L","5L","6L"),max.show=4,
+function(x,which.plot=c("none","1L","2L","3L","4L","5L","6L"),max.show=4,
plot.single=FALSE, asset.name,
which.plot.single=c("none","1L","2L","3L","4L","5L","6L",
"7L","8L","9L"),legend.txt=TRUE,...)
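The fit.fund-to-x renames in this and the following hunks are not cosmetic: an S3 method has to keep the argument names of its generic, here plot(x, y, ...), or R CMD check flags the mismatch. A minimal sketch of the pattern with a hypothetical class:

plot.myFit <- function(fit.obj, ...) invisible(fit.obj)  # first argument not `x`: flagged by R CMD check
plot.myFit <- function(x, ...) invisible(x)              # consistent with the plot(x, y, ...) generic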
@@ -67,14 +69,14 @@ if (plot.single == TRUE) { - idx <- fit.fund$data[,fit.fund$assetvar] == asset.name - asset.ret <- fit.fund$data[idx,fit.fund$returnsvar] - dates <- fit.fund$data[idx,fit.fund$datevar] + idx <- x$data[,x$assetvar] == asset.name + asset.ret <- x$data[idx,x$returnsvar] + dates <- x$data[idx,x$datevar] actual.z <- zoo(asset.ret,as.Date(dates)) - residuals.z <- zoo(fit.fund$residuals[,asset.name],as.Date(dates)) + residuals.z <- zoo(x$residuals[,asset.name],as.Date(dates)) fitted.z <- actual.z - residuals.z t <- length(dates) - k <- length(fit.fund$exposure.names) + k <- length(x$exposure.names) which.plot.single<-menu(c("time series plot of actual and fitted values", "time series plot of residuals with standard error bands", @@ -155,7 +157,7 @@ "Factor Contributions to VaR"), title="Factor Analytics Plot \nMake a plot selection (or 0 to exit):\n") - n <- length(fit.fund$asset.names) + n <- length(x$asset.names) if (n >= max.show) { cat(paste("numbers of assets are greater than",max.show,", show only first", max.show,"assets",sep=" ")) @@ -164,42 +166,42 @@ switch(which.plot, "1L" = { - factor.names <- colnames(fit.fund$factor.returns) + factor.names <- colnames(x$factor.returns) # nn <- length(factor.names) par(mfrow=c(n,1)) options(show.error.messages=FALSE) for (i in factor.names[1:n]) { - plot(fit.fund$factor.returns[,i],main=paste(i," Factor Returns",sep="") ) + plot(x$factor.returns[,i],main=paste(i," Factor Returns",sep="") ) } par(mfrow=c(1,1)) }, "2L" ={ par(mfrow=c(n,1)) - names <- colnames(fit.fund$residuals[,1:n]) + names <- colnames(x$residuals[,1:n]) for (i in names) { - plot(fit.fund$residuals[,i],main=paste(i," Residuals", sep="")) + plot(x$residuals[,i],main=paste(i," Residuals", sep="")) } par(mfrow=c(1,1)) }, "3L" = { - barplot(fit.fund$resid.variance[c(1:n)],...) + barplot(x$resid.variance[c(1:n)],...) 
}, "4L" = { - cor.fm = cov2cor(fit.fund$returns.cov$cov) + cor.fm = cov2cor(x$returns.cov$cov) rownames(cor.fm) = colnames(cor.fm) ord <- order(cor.fm[1,]) ordered.cor.fm <- cor.fm[ord, ord] plotcorr(ordered.cor.fm[c(1:n),c(1:n)], col=cm.colors(11)[5*ordered.cor.fm + 6]) }, "5L" = { - cov.factors = var(fit.fund$factor.returns) - names = fit.fund$asset.names + cov.factors = var(x$factor.returns) + names = x$asset.names factor.sd.decomp.list = list() for (i in names) { factor.sd.decomp.list[[i]] = - factorModelSdDecomposition(fit.fund$beta[i,], - cov.factors, fit.fund$resid.variance[i]) + factorModelSdDecomposition(x$beta[i,], + cov.factors, x$resid.variance[i]) } # function to efit.stattract contribution to sd from list getCSD = function(x) { @@ -207,7 +209,7 @@ } # extract contributions to SD from list cr.sd = sapply(factor.sd.decomp.list, getCSD) - rownames(cr.sd) = c(colnames(fit.fund$factor.returns), "residual") + rownames(cr.sd) = c(colnames(x$factor.returns), "residual") # create stacked barchart # discard intercept barplot(cr.sd[-1,(1:max.show)], main="Factor Contributions to SD", @@ -215,19 +217,19 @@ } , "6L" = { factor.es.decomp.list = list() - names = fit.fund$asset.names + names = x$asset.names for (i in names) { # check for missing values in fund data -# idx = which(!is.na(fit.fund$data[,i])) - idx <- fit.fund$data[,fit.fund$assetvar] == i - asset.ret <- fit.fund$data[idx,fit.fund$returnsvar] - tmpData = cbind(asset.ret, fit.fund$factor.returns, - fit.fund$residuals[,i]/sqrt(fit.fund$resid.variance[i]) ) +# idx = which(!is.na(x$data[,i])) + idx <- x$data[,x$assetvar] == i + asset.ret <- x$data[idx,x$returnsvar] + tmpData = cbind(asset.ret, x$factor.returns, + x$residuals[,i]/sqrt(x$resid.variance[i]) ) colnames(tmpData)[c(1,length(tmpData[1,]))] = c(i, "residual") factor.es.decomp.list[[i]] = factorModelEsDecomposition(tmpData, - fit.fund$beta[i,], - fit.fund$resid.variance[i], tail.prob=0.05) + x$beta[i,], + x$resid.variance[i], tail.prob=0.05) } # stacked bar charts of percent contributions to ES @@ -236,25 +238,25 @@ } # report as positive number cr.etl = sapply(factor.es.decomp.list, getCETL) - rownames(cr.etl) = c(colnames(fit.fund$factor.returns), "residual") + rownames(cr.etl) = c(colnames(x$factor.returns), "residual") barplot(cr.etl[-1,(1:max.show)], main="Factor Contributions to ES", legend.text=legend.txt, args.legend=list(x="topleft"),...) 
}, "7L" = { factor.VaR.decomp.list = list() - names = fit.fund$asset.names + names = x$asset.names for (i in names) { # check for missing values in fund data - # idx = which(!is.na(fit.fund$data[,i])) - idx <- fit.fund$data[,fit.fund$assetvar] == i - asset.ret <- fit.fund$data[idx,fit.fund$returnsvar] - tmpData = cbind(asset.ret, fit.fund$factor.returns, - fit.fund$residuals[,i]/sqrt(fit.fund$resid.variance[i]) ) + # idx = which(!is.na(x$data[,i])) + idx <- x$data[,x$assetvar] == i + asset.ret <- x$data[idx,x$returnsvar] + tmpData = cbind(asset.ret, x$factor.returns, + x$residuals[,i]/sqrt(x$resid.variance[i]) ) colnames(tmpData)[c(1,length(tmpData[1,]))] = c(i, "residual") factor.VaR.decomp.list[[i]] = factorModelVaRDecomposition(tmpData, - fit.fund$beta[i,], - fit.fund$resid.variance[i], tail.prob=0.05) + x$beta[i,], + x$resid.variance[i], tail.prob=0.05) } @@ -264,7 +266,7 @@ } # report as positive number cr.var = sapply(factor.VaR.decomp.list, getCVaR) - rownames(cr.var) = c(colnames(fit.fund$factor.returns), "residual") + rownames(cr.var) = c(colnames(x$factor.returns), "residual") barplot(cr.var[-1,(1:max.show)], main="Factor Contributions to VaR", legend.text=legend.txt, args.legend=list(x="topleft"),...) }, @@ -273,6 +275,5 @@ } - } Modified: pkg/FactorAnalytics/R/plot.StatFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/plot.StatFactorModel.r 2013-08-02 12:15:36 UTC (rev 2699) +++ pkg/FactorAnalytics/R/plot.StatFactorModel.r 2013-08-02 18:01:50 UTC (rev 2700) @@ -5,7 +5,7 @@ #' #' PCA works well. APCA is underconstruction. #' -#' @param fit.stat fit object created by fitStatisticalFactorModel. +#' @param x fit object created by fitStatisticalFactorModel. #' @param variables Optional. an integer vector telling which variables are to #' be plotted. The default is to plot all the variables, or the number of #' variables explaining 90 percent of the variance, whichever is bigger. @@ -50,9 +50,10 @@ #' # plot single asset #' plot(sfm.pca.fit,plot.single=TRUE,asset.name="CITCRP") #' } -#' +#' @method plot StatFactorModel +#' @export plot.StatFactorModel <- -function(fit.stat, variables, cumulative = TRUE, style = "bar", +function(x, variables, cumulative = TRUE, style = "bar", which.plot = c("none","1L","2L","3L","4L","5L","6L","7L","8L"), hgrid = FALSE, vgrid = FALSE,plot.single=FALSE, asset.name, which.plot.single=c("none","1L","2L","3L","4L","5L","6L", @@ -130,10 +131,10 @@ # pca method - if ( dim(fit.stat$asset.ret)[1] > dim(fit.stat$asset.ret)[2] ) { + if ( dim(x$asset.ret)[1] > dim(x$asset.ret)[2] ) { - fit.lm = fit.stat$asset.fit[[asset.name]] + fit.lm = x$asset.fit[[asset.name]] ## exact information from lm object @@ -248,12 +249,12 @@ ) } else { #apca method - dates <- names(fit.stat$data[,asset.name]) - actual.z <- zoo(fit.stat$asset.ret[,asset.name],as.Date(dates)) - residuals.z <- zoo(fit.stat$residuals,as.Date(dates)) + dates <- names(x$data[,asset.name]) + actual.z <- zoo(x$asset.ret[,asset.name],as.Date(dates)) + residuals.z <- zoo(x$residuals,as.Date(dates)) fitted.z <- actual.z - residuals.z t <- length(dates) - k <- fit.stat$k + k <- x$k which.plot.single<-menu(c("time series plot of actual and fitted values", "time series plot of residuals with standard error bands", @@ -346,12 +347,12 @@ ## 1. screeplot. 
## if(missing(variables)) { - vars <- fit.stat$eigen + vars <- x$eigen n90 <- which(cumsum(vars)/ sum(vars) > 0.9)[1] - variables <- 1:max(fit.stat$k, min(10, n90)) + variables <- 1:max(x$k, min(10, n90)) } - screeplot(fit.stat, variables, cumulative, + screeplot(x, variables, cumulative, style, "Screeplot") }, "2L" = { @@ -359,14 +360,14 @@ ## 2. factor returns ## if(missing(variables)) { - f.ret <- fit.stat$factors + f.ret <- x$factors } plot.ts(f.ret) } , "3L" = { - cov.fm<- factorModelCovariance(t(fit.stat$loadings),var(fit.stat$factors), - fit.stat$resid.variance) + cov.fm<- factorModelCovariance(t(x$loadings),var(x$factors), + x$resid.variance) cor.fm = cov2cor(cov.fm) rownames(cor.fm) = colnames(cor.fm) ord <- order(cor.fm[1,]) @@ -374,44 +375,44 @@ plotcorr(ordered.cor.fm[(1:max.show),(1:max.show)], col=cm.colors(11)[5*ordered.cor.fm + 6]) }, "4L" ={ - barplot(fit.stat$r2[1:max.show]) + barplot(x$r2[1:max.show]) }, "5L" = { - barplot(fit.stat$resid.variance[1:max.show]) + barplot(x$resid.variance[1:max.show]) }, "6L" = { - cov.factors = var(fit.stat$factors) - names = colnames(fit.stat$asset.ret) + cov.factors = var(x$factors) + names = colnames(x$asset.ret) factor.sd.decomp.list = list() for (i in names) { factor.sd.decomp.list[[i]] = - factorModelSdDecomposition(fit.stat$loadings[,i], - cov.factors, fit.stat$resid.variance[i]) + factorModelSdDecomposition(x$loadings[,i], + cov.factors, x$resid.variance[i]) } - # function to efit.stattract contribution to sd from list + # function to extract contribution to sd from list getCSD = function(x) { x$cr.fm } # extract contributions to SD from list cr.sd = sapply(factor.sd.decomp.list, getCSD) - rownames(cr.sd) = c(colnames(fit.stat$factors), "residual") + rownames(cr.sd) = c(colnames(x$factors), "residual") # create stacked barchart barplot(cr.sd[,(1:max.show)], main="Factor Contributions to SD", legend.text=T, args.legend=list(x="topleft")) } , "7L" ={ factor.es.decomp.list = list() - names = colnames(fit.stat$asset.ret) + names = colnames(x$asset.ret) for (i in names) { # check for missing values in fund data - idx = which(!is.na(fit.stat$asset.ret[,i])) - tmpData = cbind(fit.stat$asset.ret[idx,i], fit.stat$factors, - fit.stat$residuals[,i]/sqrt(fit.stat$resid.variance[i])) + idx = which(!is.na(x$asset.ret[,i])) + tmpData = cbind(x$asset.ret[idx,i], x$factors, + x$residuals[,i]/sqrt(x$resid.variance[i])) colnames(tmpData)[c(1,length(tmpData[1,]))] = c(i, "residual") factor.es.decomp.list[[i]] = factorModelEsDecomposition(tmpData, - fit.stat$loadings[,i], - fit.stat$resid.variance[i], tail.prob=0.05) + x$loadings[,i], + x$resid.variance[i], tail.prob=0.05) } @@ -421,23 +422,23 @@ } # report as positive number cr.etl = sapply(factor.es.decomp.list, getCETL) - rownames(cr.etl) = c(colnames(fit.stat$factors), "residual") + rownames(cr.etl) = c(colnames(x$factors), "residual") barplot(cr.etl[,(1:max.show)], main="Factor Contributions to ES", legend.text=T, args.legend=list(x="topleft") ) }, "8L" = { factor.VaR.decomp.list = list() - names = colnames(fit.stat$asset.ret) + names = colnames(x$asset.ret) for (i in names) { # check for missing values in fund data - idx = which(!is.na(fit.stat$asset.ret[,i])) - tmpData = cbind(fit.stat$asset.ret[idx,i], fit.stat$factors, - fit.stat$residuals[,i]/sqrt(fit.stat$resid.variance[i])) + idx = which(!is.na(x$asset.ret[,i])) + tmpData = cbind(x$asset.ret[idx,i], x$factors, + x$residuals[,i]/sqrt(x$resid.variance[i])) colnames(tmpData)[c(1,length(tmpData[1,]))] = c(i, "residual") 
factor.VaR.decomp.list[[i]] = factorModelVaRDecomposition(tmpData, - fit.stat$loadings[,i], - fit.stat$resid.variance[i], tail.prob=0.05) + x$loadings[,i], + x$resid.variance[i], tail.prob=0.05) } @@ -447,7 +448,7 @@ } # report as positive number cr.var = sapply(factor.VaR.decomp.list, getCVaR) - rownames(cr.var) = c(colnames(fit.stat$factors), "residual") + rownames(cr.var) = c(colnames(x$factors), "residual") barplot(cr.var[,(1:max.show)], main="Factor Contributions to VaR", legend.text=T, args.legend=list(x="topleft")) }, invisible() @@ -455,4 +456,5 @@ ) } + } Modified: pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r 2013-08-02 12:15:36 UTC (rev 2699) +++ pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r 2013-08-02 18:01:50 UTC (rev 2700) @@ -4,7 +4,7 @@ #' all fit models or choose a single asset to plot. #' #' -#' @param fit.macro fit object created by fitTimeSeriesFactorModel. +#' @param x fit object created by fitTimeSeriesFactorModel. #' @param colorset Defualt colorset is c(1:12). #' @param legend.loc plot legend or not. Defualt is \code{NULL}. #' @param which.plot integer indicating which plot to create: "none" will @@ -39,9 +39,10 @@ #' # single plot of HAM1 asset #' plot(fit.macro, plot.single=TRUE, asset.name="HAM1") #' } -#' +#' @method plot TimeSeriesFactorModel +#' @export plot.TimeSeriesFactorModel <- - function(fit.macro,colorset=c(1:12),legend.loc=NULL, + function(x,colorset=c(1:12),legend.loc=NULL, which.plot=c("none","1L","2L","3L","4L","5L","6L","7L"),max.show=6, plot.single=FALSE, asset.name,which.plot.single=c("none","1L","2L","3L","4L","5L","6L", "7L","8L","9L","10L","11L","12L","13L")) { @@ -75,9 +76,9 @@ stop("Neet to specify an asset to plot if plot.single is TRUE.") } - fit.lm = fit.macro$asset.fit[[asset.name]] + fit.lm = x$asset.fit[[asset.name]] - if (fit.macro$variable.selection == "none") { + if (x$variable.selection == "none") { ## extract information from lm object @@ -156,7 +157,7 @@ }, "10L"= { ## CUSUM plot of recursive residuals - if (as.character(fit.macro$call["fit.method"]) == "OLS") { + if (as.character(x$call["fit.method"]) == "OLS") { cusum.rec = efp(fit.formula, type="Rec-CUSUM", data=fit.lm$model) plot(cusum.rec, sub=asset.name) } else @@ -164,7 +165,7 @@ }, "11L"= { ## CUSUM plot of OLS residuals - if (as.character(fit.macro$call["fit.method"]) == "OLS") { + if (as.character(x$call["fit.method"]) == "OLS") { cusum.ols = efp(fit.formula, type="OLS-CUSUM", data=fit.lm$model) plot(cusum.ols, sub=asset.name) } else @@ -172,7 +173,7 @@ }, "12L"= { ## CUSUM plot of recursive estimates relative to full sample estimates - if (as.character(fit.macro$call["fit.method"]) == "OLS") { + if (as.character(x$call["fit.method"]) == "OLS") { cusum.est = efp(fit.formula, type="fluctuation", data=fit.lm$model) plot(cusum.est, functional=NULL, sub=asset.name) } else @@ -180,7 +181,7 @@ }, "13L"= { ## rolling regression over 24 month window - if (as.character(fit.macro$call["fit.method"]) == "OLS") { + if (as.character(x$call["fit.method"]) == "OLS") { rollReg <- function(data.z, formula) { coef(lm(formula, data = as.data.frame(data.z))) } @@ -188,8 +189,8 @@ rollReg.z = rollapply(reg.z, FUN=rollReg, fit.formula, width=24, by.column = FALSE, align="right") plot(rollReg.z, main=paste("24-month rolling regression estimates:", asset.name, sep=" ")) - } else if (as.character(fit.macro$call["fit.method"]) == "DLS") { - decay.factor <- 
as.numeric(as.character(fit.macro$call["decay.factor"])) + } else if (as.character(x$call["fit.method"]) == "DLS") { + decay.factor <- as.numeric(as.character(x$call["decay.factor"])) t.length <- 24 w <- rep(decay.factor^(t.length-1),t.length) for (k in 2:t.length) { @@ -213,10 +214,10 @@ } else { # lar or lasso - factor.names = fit.macro$factors.names - plot.data = fit.macro$data[,c(asset.name,factor.names)] - alpha = fit.macro$alpha[asset.name] - beta = as.matrix(fit.macro$beta[asset.name,]) + factor.names = x$factors.names + plot.data = x$data[,c(asset.name,factor.names)] + alpha = x$alpha[asset.name] + beta = as.matrix(x$beta[asset.name,]) fitted.z = zoo(alpha+as.matrix(plot.data[,factor.names])%*%beta,as.Date(rownames(plot.data))) residuals.z = plot.data[,asset.name]-fitted.z actual.z = zoo(plot.data[,asset.name],as.Date(rownames(plot.data))) @@ -303,10 +304,10 @@ title="Factor Analytics Plot \nMake a plot selection (or 0 to exit):\n") - variable.selection = fit.macro$variable.selection - asset.names = fit.macro$assets.names - factor.names = fit.macro$factors.names - plot.data = fit.macro$data[,c(asset.names,factor.names)] + variable.selection = x$variable.selection + asset.names = x$assets.names + factor.names = x$factors.names + plot.data = x$data[,c(asset.names,factor.names)] cov.factors = var(plot.data[,factor.names]) n <- length(asset.names) @@ -321,8 +322,8 @@ par(mfrow=c(n/2,2)) if (variable.selection == "lar" || variable.selection == "lasso") { for (i in 1:n) { - alpha = fit.macro$alpha[i] - beta = as.matrix(fit.macro$beta[i,]) + alpha = x$alpha[i] + beta = as.matrix(x$beta[i,]) fitted = alpha+as.matrix(plot.data[,factor.names])%*%beta dataToPlot = cbind(fitted, plot.data[,i]) colnames(dataToPlot) = c("Fitted","Actual") @@ -331,7 +332,7 @@ } } else { for (i in 1:n) { - dataToPlot = cbind(fitted(fit.macro$asset.fit[[i]]), na.omit(plot.data[,i])) + dataToPlot = cbind(fitted(x$asset.fit[[i]]), na.omit(plot.data[,i])) colnames(dataToPlot) = c("Fitted","Actual") main = paste("Factor Model fit for",asset.names[i],seq="") chart.TimeSeries(dataToPlot,colorset = colorset, legend.loc = legend.loc,main=main) @@ -340,14 +341,14 @@ par(mfrow=c(1,1)) }, "2L" ={ - barplot(fit.macro$r2) + barplot(x$r2) }, "3L" = { - barplot(fit.macro$resid.variance) + barplot(x$resid.variance) }, "4L" = { - cov.fm<- factorModelCovariance(fit.macro$beta,cov.factors,fit.macro$resid.variance) + cov.fm<- factorModelCovariance(x$beta,cov.factors,x$resid.variance) cor.fm = cov2cor(cov.fm) rownames(cor.fm) = colnames(cor.fm) ord <- order(cor.fm[1,]) @@ -358,8 +359,8 @@ factor.sd.decomp.list = list() for (i in asset.names) { factor.sd.decomp.list[[i]] = - factorModelSdDecomposition(fit.macro$beta[i,], - cov.factors, fit.macro$resid.variance[i]) + factorModelSdDecomposition(x$beta[i,], + cov.factors, x$resid.variance[i]) } # function to extract contribution to sd from list getCSD = function(x) { @@ -379,17 +380,17 @@ for (i in asset.names) { idx = which(!is.na(plot.data[,i])) - alpha = fit.macro$alpha[i] - beta = as.matrix(fit.macro$beta[i,]) + alpha = x$alpha[i] + beta = as.matrix(x$beta[i,]) fitted = alpha+as.matrix(plot.data[,factor.names])%*%beta residual = plot.data[,i]-fitted tmpData = cbind(plot.data[idx,i], plot.data[idx,factor.names], - (residual[idx,]/sqrt(fit.macro$resid.variance[i])) ) + (residual[idx,]/sqrt(x$resid.variance[i])) ) colnames(tmpData)[c(1,length(tmpData))] = c(i, "residual") factor.es.decomp.list[[i]] = factorModelEsDecomposition(tmpData, - fit.macro$beta[i,], - fit.macro$resid.variance[i], 
tail.prob=0.05) + x$beta[i,], + x$resid.variance[i], tail.prob=0.05) } } else { @@ -398,12 +399,12 @@ # check for missing values in fund data idx = which(!is.na(plot.data[,i])) tmpData = cbind(plot.data[idx,i], plot.data[idx,factor.names], - residuals(fit.macro$asset.fit[[i]])/sqrt(fit.macro$resid.variance[i])) + residuals(x$asset.fit[[i]])/sqrt(x$resid.variance[i])) colnames(tmpData)[c(1,length(tmpData))] = c(i, "residual") factor.es.decomp.list[[i]] = factorModelEsDecomposition(tmpData, - fit.macro$beta[i,], - fit.macro$resid.variance[i], tail.prob=0.05) + x$beta[i,], + x$resid.variance[i], tail.prob=0.05) } } @@ -425,17 +426,17 @@ for (i in asset.names) { idx = which(!is.na(plot.data[,i])) - alpha = fit.macro$alpha[i] - beta = as.matrix(fit.macro$beta[i,]) + alpha = x$alpha[i] + beta = as.matrix(x$beta[i,]) fitted = alpha+as.matrix(plot.data[,factor.names])%*%beta residual = plot.data[,i]-fitted tmpData = cbind(plot.data[idx,i], plot.data[idx,factor.names], - (residual[idx,]/sqrt(fit.macro$resid.variance[i])) ) + (residual[idx,]/sqrt(x$resid.variance[i])) ) colnames(tmpData)[c(1,length(tmpData))] = c(i, "residual") factor.VaR.decomp.list[[i]] = factorModelVaRDecomposition(tmpData, - fit.macro$beta[i,], - fit.macro$resid.variance[i], tail.prob=0.05) + x$beta[i,], + x$resid.variance[i], tail.prob=0.05) } } else { @@ -443,12 +444,12 @@ # check for missing values in fund data idx = which(!is.na(plot.data[,i])) tmpData = cbind(plot.data[idx,i], plot.data[idx,factor.names], - residuals(fit.macro$asset.fit[[i]])/sqrt(fit.macro$resid.variance[i])) + residuals(x$asset.fit[[i]])/sqrt(x$resid.variance[i])) colnames(tmpData)[c(1,length(tmpData))] = c(i, "residual") factor.VaR.decomp.list[[i]] = factorModelVaRDecomposition(tmpData, - fit.macro$beta[i,], - fit.macro$resid.variance[i], tail.prob=0.05, + x$beta[i,], + x$resid.variance[i], tail.prob=0.05, VaR.method="HS") } } @@ -466,5 +467,5 @@ invisible() ) } - + } Modified: pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r 2013-08-02 12:15:36 UTC (rev 2699) +++ pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r 2013-08-02 18:01:50 UTC (rev 2700) @@ -5,35 +5,36 @@ #' newdata must be data.frame and contians date variable, asset variable and exact #' exposures names that are used in fit object by \code{fitFundamentalFactorModel} #' -#' @param fit "FundamentalFactorModel" object +#' @param object fit "FundamentalFactorModel" object #' @param newdata An optional data frame in which to look for variables with which to predict. #' If omitted, the fitted values are used. #' @param new.assetvar specify new asset variable in newdata if newdata is provided. -#' @param new.datevar speficy new date variable in newdata if newdata is provided. +#' @param new.datevar speficy new date variable in newdata if newdata is provided. 
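For context on the renames in this hunk: an S3 method must keep its generic's argument names, and stats::predict is defined as predict(object, ...), so the first formal becomes object; the @method predict FundamentalFactorModel and @export tags that follow let roxygen2 write the matching S3method(predict, FundamentalFactorModel) directive into NAMESPACE. A minimal sketch of the pattern -- the ToyFactorModel class and its lm.fit field are invented for illustration, not part of FactorAnalytics:

predict.ToyFactorModel <- function(object, newdata = NULL, ...) {
  # with no newdata, return in-sample fitted values, as the package methods do
  if (is.null(newdata)) {
    fitted(object$lm.fit)
  } else {
    predict(object$lm.fit, newdata = newdata, ...)
  }
}

fit <- structure(list(lm.fit = lm(dist ~ speed, data = cars)),
                 class = "ToyFactorModel")
head(predict(fit))                       # dispatches to predict.ToyFactorModel
predict(fit, newdata = data.frame(speed = c(10, 20)))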
+#' @method predict FundamentalFactorModel #' @export #' @author Yi-An Chen #' -predict.FundamentalFactorModel <- function(fit.fund,newdata,new.assetvar,new.datevar){ +predict.FundamentalFactorModel <- function(object,newdata,new.assetvar,new.datevar){ # if there is no newdata provided # calculate fitted values - datevar <- as.character(fit.fund$datevar) - assetvar <- as.character(fit.fund$assetvar) - assets = unique(fit.fund$data[,assetvar]) - timedates = as.Date(unique(fit.fund$data[,datevar])) - exposure.names <- fit.fund$exposure.names + datevar <- as.character(object$datevar) + assetvar <- as.character(object$assetvar) + assets = unique(object$data[,assetvar]) + timedates = as.Date(unique(object$data[,datevar])) + exposure.names <- object$exposure.names numTimePoints <- length(timedates) numExposures <- length(exposure.names) numAssets <- length(assets) - f <- fit.fund$factor.returns # T X 3 + f <- object$factor.returns # T X 3 predictor <- function(data) { fitted <- rep(NA,numAssets) for (i in 1:numTimePoints) { - fit.tmp <- fit.fund$beta %*% t(f[i,]) + fit.tmp <- object$beta %*% t(f[i,]) fitted <- rbind(fitted,t(fit.tmp)) } fitted <- fitted[-1,] @@ -63,7 +64,7 @@ } if (missing(newdata) || is.null(newdata)) { - ans <- predictor(fit.fund$data) + ans <- predictor(object$data) } # predict returns by newdata @@ -82,6 +83,5 @@ ans <- predictor.new(newdata,new.datevar,new.assetvar) } } - return(ans) } \ No newline at end of file Modified: pkg/FactorAnalytics/R/predict.StatFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/predict.StatFactorModel.r 2013-08-02 12:15:36 UTC (rev 2699) +++ pkg/FactorAnalytics/R/predict.StatFactorModel.r 2013-08-02 18:01:50 UTC (rev 2700) @@ -3,7 +3,7 @@ #' Generic function of predict method for fitStatisticalFactorModel. It utilizes #' function \code{predict.lm}. #' -#' @param fit.stat "StatFactorModel" object created by fitStatisticalFactorModel. +#' @param object A fit object created by fitStatisticalFactorModel. #' @param newdata a vector, matrix, data.frame, xts, timeSeries or zoo object to be coerced. #' @param ... Any other arguments used in \code{predict.lm}. For example like newdata and fit.se. #' @author Yi-An Chen. @@ -14,16 +14,17 @@ [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2700 From noreply at r-forge.r-project.org Fri Aug 2 21:07:35 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 2 Aug 2013 21:07:35 +0200 (CEST) Subject: [Returnanalytics-commits] r2701 - in pkg/FactorAnalytics: . 
R man Message-ID: <20130802190735.773031855FE@r-forge.r-project.org> Author: chenyian Date: 2013-08-02 21:07:35 +0200 (Fri, 02 Aug 2013) New Revision: 2701 Removed: pkg/FactorAnalytics/man/CornishFisher.Rd Modified: pkg/FactorAnalytics/NAMESPACE pkg/FactorAnalytics/R/predict.TimeSeriesFactorModel.r pkg/FactorAnalytics/man/ pkg/FactorAnalytics/man/predict.TimeSeriesFactorModel.Rd Log: Modified: pkg/FactorAnalytics/NAMESPACE =================================================================== --- pkg/FactorAnalytics/NAMESPACE 2013-08-02 18:01:50 UTC (rev 2700) +++ pkg/FactorAnalytics/NAMESPACE 2013-08-02 19:07:35 UTC (rev 2701) @@ -1,14 +1,18 @@ +export(dCornishFisher) export(factorModelMonteCarlo) export(fitTimeSeriesFactorModel) +export(pCornishFisher) +export(qCornishFisher) +export(rCornishFisher) S3method(plot,FundamentalFactorModel) S3method(plot,StatFactorModel) S3method(plot,TimeSeriesFactorModel) S3method(predict,FundamentalFactorModel) S3method(predict,StatFactorModel) +S3method(predict,TimeSeriesFactorModel) S3method(print,FundamentalFactorModel) S3method(print,StatFactorModel) S3method(print,TimeSeriesFactorModel) S3method(summary,FundamentalFactorModel) S3method(summary,StatFactorModel) S3method(summary,TimeSeriesFactorModel) -S3method(TimeSeriesFactorModel) Modified: pkg/FactorAnalytics/R/predict.TimeSeriesFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/predict.TimeSeriesFactorModel.r 2013-08-02 18:01:50 UTC (rev 2700) +++ pkg/FactorAnalytics/R/predict.TimeSeriesFactorModel.r 2013-08-02 19:07:35 UTC (rev 2701) @@ -21,7 +21,7 @@ #' predict(fit) #' predict(fit,newdata,interval="confidence") #' -#' @method TimeSeriesFactorModel +#' @method predict TimeSeriesFactorModel #' @export #' Property changes on: pkg/FactorAnalytics/man ___________________________________________________________________ Modified: svn:ignore - covEWMA.Rd plot.MacroFactorModel.Rd print.MacroFactorModel.Rd summary.MacroFactorModel.Rd summary.TimeSeriesModel.Rd + CornishFisher.Rd covEWMA.Rd plot.MacroFactorModel.Rd print.MacroFactorModel.Rd summary.MacroFactorModel.Rd summary.TimeSeriesModel.Rd Deleted: pkg/FactorAnalytics/man/CornishFisher.Rd =================================================================== --- pkg/FactorAnalytics/man/CornishFisher.Rd 2013-08-02 18:01:50 UTC (rev 2700) +++ pkg/FactorAnalytics/man/CornishFisher.Rd 2013-08-02 19:07:35 UTC (rev 2701) @@ -1,74 +0,0 @@ -\name{CornishFisher} -\alias{CornishFisher} -\alias{dCornishFisher} -\alias{pCornishFisher} -\alias{qCornishFisher} -\alias{rCornishFisher} -\title{Functions for Cornish-Fisher density, CDF, random number simulation and -quantile.} -\arguments{ - \item{n}{scalar, number of simulated values in - rCornishFisher. Sample length in - density,distribution,quantile function.} - - \item{sigma}{scalar, standard deviation.} - - \item{skew}{scalar, skewness.} - - \item{ekurt}{scalar, excess kurtosis.} - - \item{seed}{set seed here. Default is \code{NULL}.} - - \item{x,q}{vector of standardized quantiles. See detail.} - - \item{p}{vector of probabilities.} -} -\value{ - n simulated values from Cornish-Fisher distribution. -} -\description{ - \code{dCornishFisher} Computes Cornish-Fisher density - from two term Edgeworth expansion given mean, standard - deviation, skewness and excess kurtosis. - \code{pCornishFisher} Computes Cornish-Fisher CDF from - two term Edgeworth expansion given mean, standard - deviation, skewness and excess kurtosis. 
- \code{qCornishFisher} Computes Cornish-Fisher quantiles - from two term Edgeworth expansion given mean, standard - deviation, skewness and excess kurtosis. - \code{rCornishFisher} simulate observations based on - Cornish-Fisher quantile expansion given mean, standard - deviation, skewness and excess kurtosis. -} -\details{ - CDF(q) = Pr(sqrt(n)*(x_bar-mu)/sigma < q) -} -\examples{ -# generate 1000 observation from Cornish-Fisher distribution -rc <- rCornishFisher(1000,1,0,5) -hist(rc,breaks=100,freq=FALSE,main="simulation of Cornish Fisher Distribution", - xlim=c(-10,10)) -lines(seq(-10,10,0.1),dnorm(seq(-10,10,0.1),mean=0,sd=1),col=2) -# compare with standard normal curve - -# example from A.dasGupta p.188 exponential example -# x is iid exp(1) distribution, sample size = 5 -# then x_bar is Gamma(shape=5,scale=1/5) distribution -q <- c(0,0.4,1,2) -# exact cdf -pgamma(q/sqrt(5)+1,shape=5,scale=1/5) -# use CLT -pnorm(q) -# use edgeworth expansion -pCornishFisher(q,n=5,skew=2,ekurt=6) -} -\author{ - Eric Zivot and Yi-An Chen. -} -\references{ - A.DasGupta, "Asymptotic Theory of Statistics and - Probability", Springer Science+Business Media,LLC 2008 - Thomas A.Severini, "Likelihood Methods in Statistics", - Oxford University Press, 2000 -} - Modified: pkg/FactorAnalytics/man/predict.TimeSeriesFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/predict.TimeSeriesFactorModel.Rd 2013-08-02 18:01:50 UTC (rev 2700) +++ pkg/FactorAnalytics/man/predict.TimeSeriesFactorModel.Rd 2013-08-02 19:07:35 UTC (rev 2701) @@ -2,8 +2,8 @@ \alias{predict.TimeSeriesFactorModel} \title{predict method for TimeSeriesModel object.} \usage{ - \method{TimeSeriesFactorModel}{} (object, newdata = NULL, - ...) + \method{predict}{TimeSeriesFactorModel} (object, + newdata = NULL, ...) 
} \arguments{ \item{object}{A fit object created by From noreply at r-forge.r-project.org Fri Aug 2 21:14:57 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 2 Aug 2013 21:14:57 +0200 (CEST) Subject: [Returnanalytics-commits] r2702 - in pkg/FactorAnalytics: R man Message-ID: <20130802191458.0555218460E@r-forge.r-project.org> Author: chenyian Date: 2013-08-02 21:14:57 +0200 (Fri, 02 Aug 2013) New Revision: 2702 Removed: pkg/FactorAnalytics/R/plot.FM.attribution.r pkg/FactorAnalytics/R/summary.FM.attribution.r pkg/FactorAnalytics/man/Style.Rd pkg/FactorAnalytics/man/factorModelPerformanceAttribution.Rd pkg/FactorAnalytics/man/impliedFactorReturns.Rd pkg/FactorAnalytics/man/modifiedEsReport.Rd pkg/FactorAnalytics/man/modifiedIncrementalES.Rd pkg/FactorAnalytics/man/modifiedIncrementalVaR.Rd pkg/FactorAnalytics/man/modifiedPortfolioEsDecomposition.Rd pkg/FactorAnalytics/man/modifiedPortfolioVaRDecomposition.Rd pkg/FactorAnalytics/man/modifiedVaRReport.Rd pkg/FactorAnalytics/man/nonparametricEsReport.Rd pkg/FactorAnalytics/man/nonparametricIncrementalES.Rd pkg/FactorAnalytics/man/nonparametricIncrementalVaR.Rd pkg/FactorAnalytics/man/nonparametricPortfolioEsDecomposition.Rd pkg/FactorAnalytics/man/nonparametricPortfolioVaRDecomposition.Rd pkg/FactorAnalytics/man/nonparametricVaRReport.Rd pkg/FactorAnalytics/man/normalEsReport.Rd pkg/FactorAnalytics/man/normalIncrementalES.Rd pkg/FactorAnalytics/man/normalIncrementalVaR.Rd pkg/FactorAnalytics/man/normalPortfolioEsDecomposition.Rd pkg/FactorAnalytics/man/normalPortfolioVaRDecomposition.Rd pkg/FactorAnalytics/man/normalVaRReport.Rd pkg/FactorAnalytics/man/plot.FM.attribution.Rd pkg/FactorAnalytics/man/scenarioPredictions.Rd pkg/FactorAnalytics/man/scenarioPredictionsPortfolio.Rd pkg/FactorAnalytics/man/summary.FM.attribution.Rd Modified: pkg/FactorAnalytics/R/ pkg/FactorAnalytics/man/ Log: temporally remove .r and Rd files to implement R CMD check Property changes on: pkg/FactorAnalytics/R ___________________________________________________________________ Modified: svn:ignore - FactorAnalytics-package.R bootstrapFactorESdecomposition.r bootstrapFactorVaRdecomposition.r chart.RollingStyle.R chart.Style.R covEWMA.R dCornishFisher.R factorModelFactorRiskDecomposition.r factorModelGroupRiskDecomposition.r factorModelPerformanceAttribution.r factorModelPortfolioRiskDecomposition.r factorModelRiskAttribution.r factorModelRiskDecomposition.r factorModelSimulation.r impliedFactorReturns.R modifiedEsReport.R modifiedIncrementalES.R modifiedIncrementalVaR.R modifiedPortfolioEsDecomposition.R modifiedPortfolioVaRDecomposition.R modifiedVaRReport.R nonparametricEsReport.R nonparametricIncrementalES.R nonparametricIncrementalVaR.R nonparametricPortfolioEsDecomposition.R nonparametricPortfolioVaRDecomposition.R nonparametricVaRReport.R normalEsReport.R normalIncrementalES.R normalIncrementalVaR.R normalPortfolioEsDecomposition.R normalPortfolioVaRDecomposition.R normalVaRReport.R pCornishFisher.R plot.MacroFactorModel.r print.MacroFactorModel.r qCornishFisher.R rCornishFisher.R scenarioPredictions.r scenarioPredictionsPortfolio.r style.QPfit.R style.fit.R summary.MacroFactorModel.r table.RollingStyle.R + FactorAnalytics-package.R bootstrapFactorESdecomposition.r bootstrapFactorVaRdecomposition.r chart.RollingStyle.R chart.Style.R covEWMA.R dCornishFisher.R factorModelFactorRiskDecomposition.r factorModelGroupRiskDecomposition.r factorModelPerformanceAttribution.r factorModelPortfolioRiskDecomposition.r 
factorModelRiskAttribution.r factorModelRiskDecomposition.r factorModelSimulation.r impliedFactorReturns.R modifiedEsReport.R modifiedIncrementalES.R modifiedIncrementalVaR.R modifiedPortfolioEsDecomposition.R modifiedPortfolioVaRDecomposition.R modifiedVaRReport.R nonparametricEsReport.R nonparametricIncrementalES.R nonparametricIncrementalVaR.R nonparametricPortfolioEsDecomposition.R nonparametricPortfolioVaRDecomposition.R nonparametricVaRReport.R normalEsReport.R normalIncrementalES.R normalIncrementalVaR.R normalPortfolioEsDecomposition.R normalPortfolioVaRDecomposition.R normalVaRReport.R pCornishFisher.R plot.FM.attribution.r plot.MacroFactorModel.r print.MacroFactorModel.r qCornishFisher.R rCornishFisher.R scenarioPredictions.r scenarioPredictionsPortfolio.r style.QPfit.R style.fit.R summary.FM.attribution.r summary.MacroFactorModel.r table.RollingStyle.R Deleted: pkg/FactorAnalytics/R/plot.FM.attribution.r =================================================================== --- pkg/FactorAnalytics/R/plot.FM.attribution.r 2013-08-02 19:07:35 UTC (rev 2701) +++ pkg/FactorAnalytics/R/plot.FM.attribution.r 2013-08-02 19:14:57 UTC (rev 2702) @@ -1,125 +0,0 @@ -# plot.FM.attribution.r -# Yi-An Chen -# 8/1/2012 - - - -#' plot FM.attribution class -#' -#' Generic function of plot method for factorModelPerformanceAttribution. -#' Either plot all fit models or choose a single asset to plot. -#' -#' -#' @param fm.attr FM.attribution object created by -#' factorModelPerformanceAttribution. -#' @param which.plot integer indicating which plot to create: "none" will -#' create a menu to choose. Defualt is none. 1 = attributed cumulative returns, -#' 2 = attributed returns on date selected by user, 3 = time series of -#' attributed returns -#' @param max.show Maximum assets to plot. Default is 6. -#' @param date date indicates for attributed returns, the date format should be -#' the same as data. -#' @param plot.single Plot a single asset of lm class. Defualt is FALSE. -#' @param fundName Name of the portfolio to be plotted. -#' @param which.plot.single integer indicating which plot to create: "none" -#' will create a menu to choose. Defualt is none. 1 = attributed cumulative -#' returns, 2 = attributed returns on date selected by user, 3 = time series of -#' attributed returns -#' @param ... more arguements for \code{chart.TimeSeries} used for plotting -#' time series -#' @author Yi-An Chen. -#' @examples -#' -#' \dontrun{ -#' # load data from the database -#' data(managers.df) -#' ret.assets = managers.df[,(1:6)] -#' factors = managers.df[,(7:9)] -#' # fit the factor model with OLS -#' fit <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", -#' variable.selection="all subsets") -#' fm.attr <- factorModelPerformanceAttribution(fit.macro) -#' # group plot -#' plot(fm.attr,date="2006-12-30") -#' # single portfolio plot -#' plot(fm.attr,date="2006-12-30") -#' } -#' -plot.FM.attribution <- function(fm.attr, which.plot=c("none","1L","2L","3L"),max.show=6, - date,plot.single=FALSE,fundName, - which.plot.single=c("none","1L","2L","3L"),...) { - # ... 
for chart.TimeSeries - require(PerformanceAnalytics) - # plot single assets - if (plot.single==TRUE){ - - which.plot.single<-which.plot.single[1] - - if (which.plot.single=="none") - which.plot.single<-menu(c("attributed cumulative returns", - paste("attributed returns","on",date,sep=" "), - "Time series of attributed returns"), - title="performance attribution plot \nMake a plot selection (or 0 to exit):\n") - switch(which.plot.single, - "1L" = { - bar <- c(fm.attr$cum.spec.ret[fundName],fm.attr$cum.ret.attr.f[fundName,]) - names(bar)[1] <- "specific.returns" - barplot(bar,horiz=TRUE,main="cumulative attributed returns",las=1) - }, - "2L" ={ - bar <- coredata(fm.attr$attr.list[[fundName]][as.Date(date)]) - barplot(bar,horiz=TRUE,main=fundName,las=1) - }, - "3L" = { - chart.TimeSeries(fm.attr$attr.list[[fundName]], - main=paste("Time series of attributed returns of ",fundName,sep=""),... ) - }, - invisible()) - } - # plot all assets - else { - which.plot<-which.plot[1] - fundnames <- rownames(fm.attr$cum.ret.attr.f) - n <- length(fundnames) - - if(which.plot=='none') - which.plot<-menu(c("attributed cumulative returns", - paste("attributed returns","on",date,sep=" "), - "time series of attributed returns"), - title="performance attribution plot \nMake a plot selection (or 0 to exit):\n") - if (n >= max.show) { - cat(paste("numbers of assets are greater than",max.show,", show only first", - max.show,"assets",sep=" ")) - n <- max.show - } - switch(which.plot, - - "1L" = { - par(mfrow=c(2,n/2)) - for (i in fundnames[1:n]) { - bar <- c(fm.attr$cum.spec.ret[i],fm.attr$cum.ret.attr.f[i,]) - names(bar)[1] <- "specific.returns" - barplot(bar,horiz=TRUE,main=i,las=1) - } - par(mfrow=c(1,1)) - }, - "2L" ={ - par(mfrow=c(2,n/2)) - for (i in fundnames[1:n]) { - bar <- coredata(fm.attr$attr.list[[i]][as.Date(date)]) - barplot(bar,horiz=TRUE,main=i,las=1) - } - par(mfrow=c(1,1)) - }, - "3L" = { - par(mfrow=c(2,n/2)) - for (i in fundnames[1:n]) { - chart.TimeSeries(fm.attr$attr.list[[i]],main=i) - } - par(mfrow=c(1,1)) - }, - invisible() - ) - - } - } Deleted: pkg/FactorAnalytics/R/summary.FM.attribution.r =================================================================== --- pkg/FactorAnalytics/R/summary.FM.attribution.r 2013-08-02 19:07:35 UTC (rev 2701) +++ pkg/FactorAnalytics/R/summary.FM.attribution.r 2013-08-02 19:14:57 UTC (rev 2702) @@ -1,25 +0,0 @@ -# summary.FM.attribution.r -# Yi-An Chen -# 8/1/2012 - - - -#' summary FM.attribution object. -#' -#' Generic function of summary method for factorModelPerformanceAttribution. -#' -#' -#' @param fm.attr FM.attribution object created by -#' factorModelPerformanceAttribution. -#' @author Yi-An Chen. 
-#' @examples -#' -#' \dontrun{ -#' fm.attr <- factorModelPerformanceAttribution(fit.macro) -#' summary(fm.attr) -#' } -#' -#' -summary.FM.attribution <- function(fm.attr) { - lapply(fm.attr[[3]],summary) -} Property changes on: pkg/FactorAnalytics/man ___________________________________________________________________ Modified: svn:ignore - CornishFisher.Rd covEWMA.Rd plot.MacroFactorModel.Rd print.MacroFactorModel.Rd summary.MacroFactorModel.Rd summary.TimeSeriesModel.Rd + CornishFisher.Rd Style.Rd covEWMA.Rd factorModelPerformanceAttribution.Rd impliedFactorReturns.Rd modifiedEsReport.Rd modifiedIncrementalES.Rd modifiedIncrementalVaR.Rd modifiedPortfolioEsDecomposition.Rd modifiedPortfolioVaRDecomposition.Rd modifiedVaRReport.Rd nonparametricEsReport.Rd nonparametricIncrementalES.Rd nonparametricIncrementalVaR.Rd nonparametricPortfolioEsDecomposition.Rd nonparametricPortfolioVaRDecomposition.Rd nonparametricVaRReport.Rd normalEsReport.Rd normalIncrementalES.Rd normalIncrementalVaR.Rd normalPortfolioEsDecomposition.Rd normalPortfolioVaRDecomposition.Rd normalVaRReport.Rd plot.FM.attribution.Rd plot.MacroFactorModel.Rd print.MacroFactorModel.Rd scenarioPredictions.Rd scenarioPredictionsPortfolio.Rd summary.FM.attribution.Rd summary.MacroFactorModel.Rd summary.TimeSeriesModel.Rd Deleted: pkg/FactorAnalytics/man/Style.Rd =================================================================== --- pkg/FactorAnalytics/man/Style.Rd 2013-08-02 19:07:35 UTC (rev 2701) +++ pkg/FactorAnalytics/man/Style.Rd 2013-08-02 19:14:57 UTC (rev 2702) @@ -1,104 +0,0 @@ -\name{chart.Style} -\alias{chart.Style} -\alias{chart.RollingStyle} -\alias{table.RollingStyle} -\alias{style.fit} -\alias{style.QPfit} -%- Also NEED an '\alias' for EACH other topic documented here. -\title{ calculate and display effective style weights } -\description{ - Functions that calculate effective style weights and display the results in a bar chart. \code{chart.Style} calculates and displays style weights calculated over a single period. \code{chart.RollingStyle} calculates and displays those weights in rolling windows through time. \code{style.fit} manages the calculation of the weights by method. \code{style.QPfit} calculates the specific constraint case that requires quadratic programming. -} -\usage{ -chart.Style(R.fund, R.style, method = c("constrained", "unconstrained", "normalized"), leverage = FALSE, main = NULL, ylim = NULL, unstacked=TRUE, ...) - -chart.RollingStyle(R.fund, R.style, method = c("constrained","unconstrained","normalized"), leverage = FALSE, width = 12, main = NULL, space = 0, ...) - -style.fit(R.fund, R.style, model=FALSE, method = c("constrained", "unconstrained", "normalized"), leverage = FALSE, selection = c("none", "AIC"), ...) - -style.QPfit(R.fund, R.style, model = FALSE, leverage = FALSE, ...) - -} -%- maybe also 'usage' for other objects documented here. -\arguments{ - \item{R.fund}{ matrix, data frame, or zoo object with fund returns to be analyzed } - \item{R.style}{ matrix, data frame, or zoo object with style index returns. Data object must be of the same length and time-aligned with R.fund } - \item{method}{ specify the method of calculation of style weights as "constrained", "unconstrained", or "normalized". For more information, see \code{\link{style.fit}} } - \item{leverage}{ logical, defaults to 'FALSE'. If 'TRUE', the calculation of weights assumes that leverage may be used. For more information, see \code{\link{style.fit}} } - \item{model}{ logical. 
If 'model' = TRUE in \code{\link{style.QPfit}}, the full result set is shown from the output of \code{solve.QP}. } - \item{selection}{ either "none" (default) or "AIC". If "AIC", then the function uses a stepwise regression to identify find the model with minimum AIC value. See \code{\link{step}} for more detail.} - \item{unstacked}{ logical. If set to 'TRUE' \emph{and} only one row of data is submitted in 'w', then the chart creates a normal column chart. If more than one row is submitted, then this is ignored. See examples below. } - \item{space}{ the amount of space (as a fraction of the average bar width) left before each bar, as in \code{\link{barplot}}. Default for \code{chart.RollingStyle} is 0; for \code{chart.Style} the default is 0.2. } - \item{main}{ set the chart title, same as in \code{\link{plot}} } - \item{width}{ number of periods or window to apply rolling style analysis over } - \item{ylim}{ set the y-axis limit, same as in \code{\link{plot}} } - \item{\dots}{ for the charting functions, these are arguments to be passed to \code{\link{barplot}}. These can include further arguments (such as 'axes', 'asp' and 'main') and graphical parameters (see 'par') which are passed to 'plot.window()', 'title()' and 'axis'. For the calculation functions, these are ignored. } -} -\details{ -These functions calculate style weights using an asset class style model as described in detail in Sharpe (1992). The use of quadratic programming to determine a fund's exposures to the changes in returns of major asset classes is usually refered to as "style analysis". - -The "unconstrained" method implements a simple factor model for style analysis, as in: -\deqn{Ri = bi1*F1+bi2*F2+...+bin*Fn+ei}{R_i = b_{i1}F_1+b_{i2}F_2+\dots+b_{in}F_n +e_i} -where \eqn{Ri}{R_i} represents the return on asset i, \eqn{Fj}{F_j} represents each factor, and \eqn{ei}{e_i} represents the "non-factor" component of the return on i. This is simply a multiple regression analysis with fund returns as the dependent variable and asset class returns as the independent variables. The resulting slope coefficients are then interpreted as the fund's historic exposures to asset class returns. In this case, coefficients do not sum to 1. - -The "normalized" method reports the results of a multiple regression analysis similar to the first, but with one constraint: the coefficients are required to add to 1. Coefficients may be negative, indicating short exposures. To enforce the constraint, coefficients are normalized. - -The "constrained" method includes the constraint that the coefficients sum to 1, but adds -that the coefficients must lie between 0 and 1. These inequality constraints require a -quadratic programming algorithm using \code{\link[quadprog]{solve.QP}} from the 'quadprog' package, -and the implementation is discussed under \code{\link{style.QPfit}}. If set to TRUE, -"leverage" allows the sum of the coefficients to exceed 1. - -According to Sharpe (1992), the calculation for the constrained case is represented as: -\deqn{min var(Rf - sum[wi * R.si]) = min var(F - w*S)}{min \sigma(R_f - \sum{w_i * R_s_i}) = min \sigma(F - w*S)} -\deqn{s.t. sum[wi] = 1; wi > 0}{ s.t. \sum{w_i} = 1; w_i > 0} - -Remembering that: - -\deqn{\sigma(aX + bY) = a^2 \sigma(X) + b^2 \sigma(Y) + 2ab cov(X,Y) = \sigma(R.f) + w'*V*w - 2*w'*cov(R.f,R.s)} - -we can drop \eqn{var(Rf)}{\sigma(R_f)} as it isn't a function of weights, multiply both sides by 1/2: - -\deqn{= min (1/2) w'*V*w - C'w}{= min (1/2) w'*V*w - C'w} -\deqn{ s.t. w'*e = 1, w_i > 0}{ s.t. 
w'*e = 1, w_i > 0} - -Which allows us to use \code{\link[quadprog]{solve.QP}}, which is specified as: -\deqn{min(-d' b + 1/2 b' D b)}{min(-d' b + 1/2 b' D b)} -and the constraints -\deqn{ A' b >= b.0 }{ A' b >= b_0 } - -so: -b is the weight vector, -D is the variance-covariance matrix of the styles -d is the covariance vector between the fund and the styles - -The chart functions then provide a graphical summary of the results. The underlying -function, \code{\link{style.fit}}, provides the outputs of the analysis and more -information about fit, including an R-squared value. - -Styles identified in this analysis may be interpreted as an average of potentially -changing exposures over the period covered. The function \code{\link{chart.RollingStyle}} -may be useful for examining the behavior of a manager's average exposures to asset classes over time, using a rolling-window analysis. - - The chart functions plot a column chart or stacked column chart of the resulting style weights to the current device. Both \code{style.fit} and \code{style.QPfit} produce a list of data frames containing 'weights' and 'R.squared' results. If 'model' = TRUE in \code{style.QPfit}, the full result set is shown from the output of \code{solve.QP}. -} -\references{ -Sharpe, W. Asset Allocation: Management Style and Performance Measurement Journal of Portfolio Management, 1992, 7-19. See \url{ http://www.stanford.edu/~wfsharpe/art/sa/sa.htm} - } -\author{ Peter Carl } -\note{ - None of the functions \code{chart.Style}, \code{style.fit}, and \code{style.QPfit} make any attempt to align the two input data series. The \code{chart.RollingStyle}, on the other hand, does merge the two series and manages the calculation over common periods. -} -\seealso{ \code{\link{barplot}}, \code{\link{par}} } -\examples{ -data(edhec) -data(managers) -style.fit(managers[97:132,2,drop=FALSE],edhec[85:120,], method="constrained", leverage=FALSE) -chart.Style(managers[97:132,2,drop=FALSE],edhec[85:120,], method="constrained", leverage=FALSE, unstack=TRUE, las=3) -chart.RollingStyle(managers[,2,drop=FALSE],edhec[,1:11], method="constrained", leverage=FALSE, width=36, cex.legend = .7, colorset=rainbow12equal, las=1) -} -% Add one or more standard keywords, see file 'KEYWORDS' in the -% R documentation directory. -\keyword{ ts } -\keyword{ multivariate } -\keyword{ hplot } Deleted: pkg/FactorAnalytics/man/factorModelPerformanceAttribution.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelPerformanceAttribution.Rd 2013-08-02 19:07:35 UTC (rev 2701) +++ pkg/FactorAnalytics/man/factorModelPerformanceAttribution.Rd 2013-08-02 19:14:57 UTC (rev 2702) @@ -1,53 +0,0 @@ -\name{factorModelPerformanceAttribution} -\alias{factorModelPerformanceAttribution} -\title{Compute BARRA-type performance attribution} -\usage{ - factorModelPerformanceAttribution(fit, benchmark = NULL, - ...) -} -\arguments{ - \item{fit}{Class of "MacroFactorModel", - "FundamentalFactorModel" or "statFactorModel".} - - \item{benchmark}{a zoo, vector or data.frame provides - benchmark time series returns.} - - \item{...}{Other controled variables for fit methods.} -} -\value{ - an object of class \code{FM.attribution} containing -} -\description{ - Decompose total returns or active returns into returns - attributed to factors and specific returns. Class of - FM.attribution is generated and generic function - \code{plot} and \code{summary} can be used. 
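The decomposition just described has a compact empirical core; a standalone sketch follows, with invented factor names and, for simplicity, exposures held constant over time (the package's attribution allows time-varying e_jt, and none of these numbers come from managers.df):

set.seed(11)
T.obs <- 60
f <- cbind(MKT = rnorm(T.obs, 0.005, 0.04),   # invented factor returns f_jt
           SMB = rnorm(T.obs, 0.000, 0.02))
e <- c(MKT = 1.1, SMB = 0.4)                  # exposures e_j, fixed here
u <- rnorm(T.obs, 0, 0.01)                    # specific returns u_t
R <- drop(f %*% e) + u                        # R_t = sum_j e_j * f_jt + u_t

attr.f <- sweep(f, 2, e, `*`)                 # returns attributed to each factor
spec   <- R - rowSums(attr.f)                 # specific return as the remainder
all.equal(spec, u)                            # TRUE: the decomposition is exact
colSums(attr.f)                               # total attributed return per factor

The last line corresponds, up to compounding, to the cum.ret.attr.f component that the attribution plotting code bar-charts.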
-} -\details{ - total returns can be decomposed into returns attributed - to factors and specific returns. \eqn{R_t = \sum_j e_{jt} - * f_{jt} + u_t},t=1..T,\eqn{e_{jt}} is exposure to factor - j and \eqn{f_{jt}} is factor j. The returns attributed to - factor j is \eqn{e_{jt} * f_{jt}} and portfolio specific - returns is \eqn{u_t} -} -\examples{ -\dontrun{ -data(managers.df) -ret.assets = managers.df[,(1:6)] -factors = managers.df[,(7:9)] -fit.macro <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", factor.set = 3, - variable.selection="all subsets",decay.factor = 0.95) -# withoud benchmark -fm.attr <- factorModelPerformanceAttribution(fit.macro) - -} -} -\author{ - Yi-An Chen. -} -\references{ - Grinold,R and Kahn R, \emph{Active Portfolio Management}, - McGraw-Hill. -} - Deleted: pkg/FactorAnalytics/man/impliedFactorReturns.Rd =================================================================== --- pkg/FactorAnalytics/man/impliedFactorReturns.Rd 2013-08-02 19:07:35 UTC (rev 2701) +++ pkg/FactorAnalytics/man/impliedFactorReturns.Rd 2013-08-02 19:14:57 UTC (rev 2702) @@ -1,53 +0,0 @@ -\name{impliedFactorReturns} -\alias{impliedFactorReturns} -\title{Compute Implied Factor Returns Using Covariance Matrix Approach} -\usage{ - impliedFactorReturns(factor.scenarios, mu.factors, - cov.factors) -} -\arguments{ - \item{factor.scenarios}{m x 1 vector of scenario values - for a subset of the n > m risk factors} - - \item{mu.factors}{\code{n x 1} vector of factor mean - returns.} - - \item{cov.factors}{\code{n x n} factor covariance - matrix.} -} -\value{ - \code{(n - m) x 1} vector of implied factor returns -} -\description{ - Compute risk factor conditional mean returns for a one - group of risk factors given specified returns for another - group of risk factors based on the assumption that all - risk factor returns are multivariately normally - distributed. -} -\details{ - Let \code{y} denote the \code{m x 1} vector of factor - scenarios and \code{x} denote the \code{(n-m) x 1} vector - of other factors. Assume that \code{(y', x')'} has a - multivariate normal distribution with mean \code{(mu.y', - mu.x')'} and covariance matrix partitioned as - \code{(cov.yy, cov.yx, cov.xy, cov.xx)}. Then the implied - factor scenarios are computed as \code{E[x|y] = mu.x + - cov.xy*cov.xx^-1 * (y - mu.y)} -} -\examples{ -# get data -data(managers.df) -factors = managers.df[,(7:9)] -# make up a factor mean returns scenario for factor SP500.TR -factor.scenarios <- 0.1 -names(factor.scenarios) <- "SP500.TR" -mu.factors <- mean(factors) -cov.factors <- var(factors) -# implied factor returns -impliedFactorReturns(factor.scenarios,mu.factors,cov.factors) -} -\author{ - Eric Zivot and Yi-An Chen. 
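The conditional mean in the details just above is the textbook multivariate-normal result, E[x | y] = mu.x + cov.xy %*% solve(cov.yy) %*% (y - mu.y); the cov.xx^-1 in the deleted page appears to be a typo for cov.yy^-1, since the inverted block must be the covariance of the scenario factors y. A standalone numerical sketch (data simulated; SP500.TR echoes the managers.df example, the other names are invented):

set.seed(1)
factors <- cbind(SP500.TR = rnorm(120, 0.006, 0.045),
                 US10Y.TR = rnorm(120, 0.003, 0.020),
                 USD.IDX  = rnorm(120, 0.000, 0.025))
mu <- colMeans(factors)
V  <- cov(factors)

y.names <- "SP500.TR"                       # the m = 1 scenario factor
x.names <- setdiff(colnames(factors), y.names)
y <- c(SP500.TR = 0.10)                     # scenario: S&P 500 returns 10%

implied <- mu[x.names] +
  drop(V[x.names, y.names, drop = FALSE] %*%
         solve(V[y.names, y.names, drop = FALSE]) %*% (y - mu[y.names]))
implied                                     # implied means of the other factors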
-} - Deleted: pkg/FactorAnalytics/man/modifiedEsReport.Rd =================================================================== --- pkg/FactorAnalytics/man/modifiedEsReport.Rd 2013-08-02 19:07:35 UTC (rev 2701) +++ pkg/FactorAnalytics/man/modifiedEsReport.Rd 2013-08-02 19:14:57 UTC (rev 2702) @@ -1,71 +0,0 @@ -\name{modifiedEsReport} -\alias{modifiedEsReport} -\title{compute ES report via Cornish-Fisher expansion for collection of assets in a -portfolio given simulated (bootstrapped) return data.} -\usage{ - modifiedEsReport(bootData, w, delta.w = 0.001, - tail.prob = 0.01, method = c("derivative", "average"), - nav, nav.p, fundStrategy, i1, i2) -} -\arguments{ - \item{bootData}{B x n matrix of B bootstrap returns on - assets in portfolio.} - - \item{w}{n x 1 vector of portfolio weights.} - - \item{delta.w}{scalar, change in portfolio weight for - computing numerical derivative. Default value is 0.010.} - - \item{tail.prob}{scalar tail probability.} - - \item{method}{character, method for computing marginal - ES. Valid choices are "derivative" for numerical - computation of the derivative of portfolio ES wrt fund - portfolio weight; "average" for approximating E[Ri | - Rp<=VaR]} - - \item{nav}{n x 1 vector of net asset values in each - fund.} - - \item{nav.p}{scalar, net asset value of portfolio - percentage.} - - \item{fundStrategy}{n x 1 vector of fund strategies.} - - \item{i1,i2}{if ff object is used, the ffapply functions - do apply an EXPRession and provide two indices FROM="i1" - and TO="i2", which mark beginning and end of the batch - and can be used in the applied expression.} -} -\value{ - dataframe with the following columns: Strategy n x 1 - strategy. Net.Asset.value n x 1 net asset values. - Allocation n x 1 vector of asset weights. Mean n x 1 mean - of each funds. Std.Dev n x 1 standard deviation of each - funds. Assets.ES n x 1 vector of asset specific ES - values. cES n x 1 vector of asset specific component ES - values. cES.dollar n x 1 vector of asset specific - component ES values in dollar terms. pcES n x 1 vector of - asset specific percent contribution to ES values. iES n x - 1 vector of asset specific incremental ES values. - iES.dollar n x 1 vector of asset specific component ES - values in dollar terms. mES n x 1 vector of asset - specific marginal ES values. mES.dollar n x 1 vector of - asset specific marginal ES values in dollar terms. -} -\description{ - compute ES report via Cornish-Fisher expansion for - collection of assets in a portfolio given simulated - (bootstrapped) return data. Report format follows that of - Excel VaR report. -} -\examples{ -data(managers.df) -ret.assets = managers.df[,(1:6)] -modifiedEsReport (bootData= ret.assets[,1:3], w=c(1/3,1/3,1/3), delta.w = 0.001, tail.prob = 0.01, - method="derivative",nav=c(100,200,100), nav.p=500, fundStrategy=c("S1","S2","S3")) -} -\author{ - Eric Zivot and Yi-An Chen. 
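Of the two marginal-ES methods documented above, the "average" one has a direct empirical form: mES_i = -E[R_i | R_p <= VaR], and weight times marginal ES gives component ES, which sums to portfolio ES. A standalone sketch on simulated data, with a plain empirical quantile standing in for the Cornish-Fisher quantile the removed functions use (fund names and numbers invented):

set.seed(42)
B <- cbind(F1 = rnorm(5000, 0.005, 0.02),   # bootData-style draws, simulated
           F2 = rnorm(5000, 0.005, 0.04),
           F3 = rnorm(5000, 0.005, 0.03))
w  <- c(1/3, 1/3, 1/3)
rp <- drop(B %*% w)                         # portfolio returns
VaR <- quantile(rp, 0.01)                   # tail.prob = 0.01
in.tail <- rp <= VaR
pES <- -mean(rp[in.tail])                   # portfolio expected shortfall
mES <- -colMeans(B[in.tail, ])              # "average" method marginal ES
cES <- w * mES                              # component ES (cES)
all.equal(sum(cES), pES)                    # TRUE: components sum to the total
cES / pES                                   # percent contributions (pcES)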
-} - Deleted: pkg/FactorAnalytics/man/modifiedIncrementalES.Rd =================================================================== --- pkg/FactorAnalytics/man/modifiedIncrementalES.Rd 2013-08-02 19:07:35 UTC (rev 2701) +++ pkg/FactorAnalytics/man/modifiedIncrementalES.Rd 2013-08-02 19:14:57 UTC (rev 2702) @@ -1,44 +0,0 @@ -\name{modifiedIncrementalES} -\alias{modifiedIncrementalES} -\title{Compute incremental ES given bootstrap data and portfolio weights.} -\usage{ - modifiedIncrementalES(bootData, w, tail.prob = 0.01, i1, - i2) -} -\arguments{ - \item{bootData}{B x N matrix of B bootstrap returns on n - assets in portfolio.} - - \item{w}{N x 1 vector of portfolio weights} - - \item{tail.prob}{scalar tail probability.} - - \item{i1,i2}{if ff object is used, the ffapply functions - do apply an EXPRession and provide two indices FROM="i1" - and TO="i2", which mark beginning and end of the batch - and can be used in the applied expression.} -} -\value{ - n x 1 matrix of incremental ES values for each asset. -} -\description{ - Compute incremental ES given bootstrap data and portfolio - weights. Incremental ES is defined as the change in - portfolio ES that occurs when an asset is removed from - the portfolio and allocation is spread equally among - remaining assets. VaR used in ES computation is computed - as an estimated quantile using the Cornish-Fisher - expansion. -} -\examples{ -data(managers.df) -ret.assets = managers.df[,(1:6)] -modifiedIncrementalES(ret.assets[,1:3],w=c(1/3,1/3,1/3),tail.prob = 0.05) -} -\author{ - Eric Zivot and Yi-An Chen. -} -\references{ - Jorian, P. (2007). Value-at-Risk, pg. 168. -} - Deleted: pkg/FactorAnalytics/man/modifiedIncrementalVaR.Rd =================================================================== --- pkg/FactorAnalytics/man/modifiedIncrementalVaR.Rd 2013-08-02 19:07:35 UTC (rev 2701) +++ pkg/FactorAnalytics/man/modifiedIncrementalVaR.Rd 2013-08-02 19:14:57 UTC (rev 2702) @@ -1,43 +0,0 @@ -\name{modifiedIncrementalVaR} -\alias{modifiedIncrementalVaR} -\title{Compute incremental VaR given bootstrap data and portfolio weights.} -\usage{ - modifiedIncrementalVaR(bootData, w, tail.prob = 0.01, i1, - i2) -} -\arguments{ - \item{bootData}{B x N matrix of B bootstrap returns on n - assets in portfolio.} - - \item{w}{N x 1 vector of portfolio weights} - - \item{tail.prob}{scalar tail probability.} - - \item{i1,i2}{if ff object is used, the ffapply functions - do apply an EXPRession and provide two indices FROM="i1" - and TO="i2", which mark beginning and end of the batch - and can be used in the applied expression.} -} -\value{ - n x 1 matrix of incremental VaR values for each asset. -} -\description{ - Compute incremental VaR given bootstrap data and - portfolio weights. Incremental VaR is defined as the - change in portfolio VaR that occurs when an asset is - removed from the portfolio and allocation is spread - equally among remaining assets. VaR is computed as an - estimated quantile using the Cornish-Fisher expansion. -} -\examples{ -data(managers.df) -ret.assets = managers.df[,(1:6)] -modifiedIncrementalVaR(ret.assets[,1:3],w=c(1/3,1/3,1/3),tail.prob = 0.05) -} -\author{ - Eric Zivot and Yi-An Chen. -} -\references{ - Jorian, P. (2007). Value-at-Risk, pg. 168. 
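Both incremental-risk pages above share one recipe: zero out asset i, spread its allocation equally over the remaining assets, and difference the two risk numbers. A sketch for the VaR case on simulated data, again with a plain empirical quantile in place of the Cornish-Fisher estimate; asset names and figures are invented, and the sign convention (VaR with the asset minus VaR without it) is one common choice that the removed code may flip:

set.seed(7)
R <- cbind(A1 = rnorm(5000, 0.004, 0.02),
           A2 = rnorm(5000, 0.004, 0.05),
           A3 = rnorm(5000, 0.004, 0.03))
tail.prob <- 0.05
VaR.p <- -quantile(drop(R %*% rep(1/3, 3)), tail.prob)   # full-portfolio VaR

iVaR <- sapply(seq_len(ncol(R)), function(i) {
  w <- rep(1 / (ncol(R) - 1), ncol(R))      # equal weights on the survivors
  w[i] <- 0                                 # asset i removed
  unname(VaR.p + quantile(drop(R %*% w), tail.prob))  # VaR with minus VaR without
})
names(iVaR) <- colnames(R)
iVaR        # positive entries: holding that asset adds to portfolio VaR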
-} - Deleted: pkg/FactorAnalytics/man/modifiedPortfolioEsDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/modifiedPortfolioEsDecomposition.Rd 2013-08-02 19:07:35 UTC (rev 2701) +++ pkg/FactorAnalytics/man/modifiedPortfolioEsDecomposition.Rd 2013-08-02 19:14:57 UTC (rev 2702) @@ -1,54 +0,0 @@ -\name{modifiedPortfolioEsDecomposition} -\alias{modifiedPortfolioEsDecomposition} -\title{Compute portfolio ES (risk) decomposition by assets.} -\usage{ - modifiedPortfolioEsDecomposition(bootData, w, - delta.w = 0.001, tail.prob = 0.01, - method = c("derivative", "average")) -} -\arguments{ - \item{bootData}{B x N matrix of B bootstrap returns on - assets in portfolio.} - - \item{w}{N x 1 vector of portfolio weights} - - \item{delta.w}{Scalar, change in portfolio weight for - computing numerical derivative.} - - \item{tail.prob}{Scalar, tail probability.} - - \item{method}{Character, method for computing marginal - ES. Valid choices are "derivative" for numerical - computation of the derivative of portfolio ES with - respect to fund portfolio weight; "average" for - approximating E[R_i | R_p<=VaR].} -} -\value{ - an S3 list containing -} -\description{ - Compute portfolio ES decomposition given historical or - simulated data and portfolio weights. Marginal ES is - computed either as the numerical derivative of ES with - respect to portfolio weight or as the expected fund - return given portfolio return is less than or equal to - portfolio VaR VaR is compute as an estimated quantile - using the Cornish-Fisher expansion. -} -\examples{ -data(managers.df) -ret.assets = managers.df[,(1:6)] -modifiedPortfolioEsDecomposition(ret.assets[,1:3], w=c(1/3,1/3,1/3), delta.w = 0.001, - tail.prob = 0.01, method=c("derivative")) -} -\author{ - Eric Zivot and Yi-An Chen. -} -\references{ - 1. Hallerback (2003), "Decomposing Portfolio - Value-at-Risk: A General Analysis", The Journal of Risk - 5/2. 2. Yamai and Yoshiba (2002). "Comparative Analyses - of Expected Shortfall and Value-at-Risk: Their Estimation - Error, Decomposition, and Optimization Bank of Japan. -} - Deleted: pkg/FactorAnalytics/man/modifiedPortfolioVaRDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/modifiedPortfolioVaRDecomposition.Rd 2013-08-02 19:07:35 UTC (rev 2701) +++ pkg/FactorAnalytics/man/modifiedPortfolioVaRDecomposition.Rd 2013-08-02 19:14:57 UTC (rev 2702) @@ -1,57 +0,0 @@ -\name{modifiedPortfolioVaRDecomposition} -\alias{modifiedPortfolioVaRDecomposition} -\title{Compute portfolio VaR decomposition given historical or simulated data and -portfolio weights.} -\usage{ - modifiedPortfolioVaRDecomposition(bootData, w, - delta.w = 0.001, tail.prob = 0.01, - method = c("derivative", "average")) -} -\arguments{ - \item{bootData}{B x N matrix of B bootstrap returns on - assets in portfolio.} - - \item{w}{N x 1 vector of portfolio weights} - - \item{delta.w}{Scalar, change in portfolio weight for - computing numerical derivative.} - - \item{tail.prob}{Scalar, tail probability.} - - \item{method}{Character, method for computing marginal - ES. Valid choices are "derivative" for numerical - computation of the derivative of portfolio ES with - respect to fund portfolio weight; "average" for - approximating E[R_i | R_p =VaR].} -} -\value{ - an S3 list containing -} -\description{ - Compute portfolio VaR decomposition given historical or - simulated data and portfolio weights. 
The partial - derivative of VaR wrt factor beta is computed as the - expected factor return given fund return is equal to its - VaR and approximated by kernel estimator. VaR is compute - as an estimated quantile using the Cornish-Fisher - expansion. -} -\examples{ -data(managers.df) -ret.assets = managers.df[,(1:6)] -modifiedPortfolioVaRDecomposition(ret.assets[,1:3], w=c(1/3,1/3,1/3), delta.w = 0.001, - tail.prob = 0.01, method=c("average")) -} -\author{ - Eric Zivot and Yi-An Chen. -} -\references{ - 1. Hallerback (2003), "Decomposing Portfolio - Value-at-Risk: A General Analysis", The Journal of Risk - 5/2. 2. Yamai and Yoshiba (2002). "Comparative Analyses - of Expected Shortfall and Value-at-Risk: Their Estimation - Error, Decomposition, and Optimization Bank of Japan. 3. - Epperlein and Smillie (2006) "Cracking VAR with Kernels," - Risk. -} - Deleted: pkg/FactorAnalytics/man/modifiedVaRReport.Rd =================================================================== --- pkg/FactorAnalytics/man/modifiedVaRReport.Rd 2013-08-02 19:07:35 UTC (rev 2701) +++ pkg/FactorAnalytics/man/modifiedVaRReport.Rd 2013-08-02 19:14:57 UTC (rev 2702) @@ -1,73 +0,0 @@ -\name{modifiedVaRReport} -\alias{modifiedVaRReport} -\title{compute VaR report via Cornish-Fisher expansion for collection of assets in -a portfolio given simulated (bootstrapped) return data.} -\usage{ - modifiedVaRReport(bootData, w, delta.w = 0.001, - tail.prob = 0.01, method = c("derivative", "average"), - nav, nav.p, fundStrategy, i1, i2) -} -\arguments{ - \item{bootData}{B x n matrix of B bootstrap returns on - assets in portfolio.} - - \item{w}{n x 1 vector of portfolio weights.} - - \item{delta.w}{scalar, change in portfolio weight for - computing numerical derivative. Default value is 0.010.} - - \item{tail.prob}{scalar tail probability.} - - \item{method}{character, method for computing marginal - VaR Valid choices are "derivative" for numerical - computation of the derivative of portfolio VaR wrt fund - portfolio weight; "average" for approximating E[Ri | Rp - =VaR]} - - \item{nav}{n x 1 vector of net asset values in each - fund.} - - \item{nav.p}{scalar, net asset value of portfolio - percentage.} - - \item{fundStrategy}{n x 1 vector of fund strategies.} - [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2702 From noreply at r-forge.r-project.org Fri Aug 2 21:26:57 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 2 Aug 2013 21:26:57 +0200 (CEST) Subject: [Returnanalytics-commits] r2703 - pkg/FactorAnalytics/data Message-ID: <20130802192658.0D0551852C2@r-forge.r-project.org> Author: chenyian Date: 2013-08-02 21:26:57 +0200 (Fri, 02 Aug 2013) New Revision: 2703 Removed: pkg/FactorAnalytics/data/factorAnalytics-Ex.ps Log: delete unknown item. 
Deleted: pkg/FactorAnalytics/data/factorAnalytics-Ex.ps
===================================================================
--- pkg/FactorAnalytics/data/factorAnalytics-Ex.ps	2013-08-02 19:14:57 UTC (rev 2702)
+++ pkg/FactorAnalytics/data/factorAnalytics-Ex.ps	2013-08-02 19:26:57 UTC (rev 2703)
@@ -1,481 +0,0 @@
[481 deleted lines of PostScript plot output omitted: an "R Graphics Output" file produced by the CornishFisher help-page example -- a histogram titled "simulation of Cornish Fisher Distribution", x-axis "rc", y-axis "Density", with an overlaid normal density curve.]

From noreply at r-forge.r-project.org Fri Aug 2 23:32:20 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Fri, 2 Aug 2013 23:32:20 +0200 (CEST)
Subject: [Returnanalytics-commits] r2704 - in pkg/FactorAnalytics: .
R data man Message-ID: <20130802213220.D5D6E18575A@r-forge.r-project.org> Author: chenyian Date: 2013-08-02 23:32:20 +0200 (Fri, 02 Aug 2013) New Revision: 2704 Removed: pkg/FactorAnalytics/data/CRSP.RDATA pkg/FactorAnalytics/data/CommomFactors.RData Modified: pkg/FactorAnalytics/NAMESPACE pkg/FactorAnalytics/R/factorModelCovariance.r pkg/FactorAnalytics/R/factorModelEsDecomposition.R pkg/FactorAnalytics/R/factorModelSdDecomposition.R pkg/FactorAnalytics/R/factorModelVaRDecomposition.R pkg/FactorAnalytics/R/fitFundamentalFactorModel.R pkg/FactorAnalytics/R/fitStatisticalFactorModel.R pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r pkg/FactorAnalytics/R/print.TimeSeriesFactorModel.r pkg/FactorAnalytics/data/ pkg/FactorAnalytics/man/factorModelCovariance.Rd pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd pkg/FactorAnalytics/man/fitStatisticalFactorModel.Rd pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/print.TimeSeriesFactorModel.Rd Log: 1. debug function arguments in Rd file do not match function itself. 2. delete CRSP.RDATA which is a duplicate of stock.RData Modified: pkg/FactorAnalytics/NAMESPACE =================================================================== --- pkg/FactorAnalytics/NAMESPACE 2013-08-02 19:26:57 UTC (rev 2703) +++ pkg/FactorAnalytics/NAMESPACE 2013-08-02 21:32:20 UTC (rev 2704) @@ -1,5 +1,11 @@ export(dCornishFisher) +export(factorModelCovariance) +export(factorModelEsDecomposition) export(factorModelMonteCarlo) +export(factorModelSdDecomposition) +export(factorModelVaRDecomposition) +export(fitFundamentalFactorModel) +export(fitStatisticalFactorModel) export(fitTimeSeriesFactorModel) export(pCornishFisher) export(qCornishFisher) Modified: pkg/FactorAnalytics/R/factorModelCovariance.r =================================================================== --- pkg/FactorAnalytics/R/factorModelCovariance.r 2013-08-02 19:26:57 UTC (rev 2703) +++ pkg/FactorAnalytics/R/factorModelCovariance.r 2013-08-02 21:32:20 UTC (rev 2704) @@ -21,71 +21,63 @@ #' @author Eric Zivot and Yi-An Chen. #' @references Zivot, E. and J. Wang (2006), \emph{Modeling Financial Time #' Series with S-PLUS, Second Edition}, Springer-Verlag. 
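The quantity assembled by factorModelCovariance is cov.fm = beta %*% factor.cov %*% t(beta) + D.e, where D.e is the diagonal matrix of residual variances. A minimal base-R sketch with toy numbers (independent of the package data, so every name below is illustrative) checks that the construction yields a valid covariance matrix:

# Toy check of the factor-model covariance identity:
# Cov(R) = B %*% Cov(F) %*% t(B) + D.e
set.seed(1)
B      <- matrix(rnorm(8), 4, 2)               # 4 assets, 2 factors
F.ret  <- matrix(rnorm(40, sd = 0.02), 20, 2)  # 20 periods of factor returns
f.cov  <- cov(F.ret)                           # 2 x 2 factor covariance
D.e    <- diag(runif(4, 0.01, 0.05))           # diagonal residual variances
cov.fm <- B %*% f.cov %*% t(B) + D.e
isSymmetric(cov.fm)                            # TRUE
all(diag(chol(cov.fm)) > 0)                    # positive definite, matching the
                                               # chol() check in the function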
+#' @export #' @examples -#' +#' \dontrun{ #' # Time Series model #' #' data(managers.df) #' factors = managers.df[,(7:9)] -#' fit <- fitTimeseriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), +#' fit <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), #' factors.names=c("EDHEC.LS.EQ","SP500.TR"), #' data=managers.df,fit.method="OLS") +#' factors = managers.df[,(7:8)] #' factorModelCovariance(fit$beta,var(factors),fit$resid.variance) #' #' # Statistical Model #' data(stat.fm.data) -#' fit <- fitStatisticalFactorModel(sfm.dat,k=2, -#' ckeckData.method="data.frame") +#' sfm.pca.fit <- fitStatisticalFactorModel(sfm.dat,k=2) +#' #' factorModelCovariance(t(sfm.pca.fit$loadings),var(sfm.pca.fit$factors),sfm.pca.fit$resid.variance) #' -#' factorModelCovariance(t(sfm.pca.fit$loadings),var(sfm.pca.fit$factors),sfm.pca.fit$resid.variance) +#' sfm.apca.fit <- fitStatisticalFactorModel(sfm.apca.dat,k=2) #' -#' sfm.apca.fit <- fitStatisticalFactorModel(sfm.apca.dat,k=2 -#' ,ckeckData.method="data.frame") -#' #' factorModelCovariance(t(sfm.apca.fit$loadings), #' var(sfm.apca.fit$factors),sfm.apca.fit$resid.variance) #' #' # fundamental factor model example -#' -#' +#' #' #' data(stock) #' # there are 447 assets #' exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") -#' test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, -#' datevar = "DATE", returnsvar = "RETURN", -#' assetvar = "TICKER", wls = TRUE, -#' regression = "classic", -#' covariance = "classic", full.resid.cov = FALSE, -#' robust.scale = TRUE) -#' -#' # compute return covariance -#' # take beta as latest date input -#' beta.mat.fundm <- subset(data,DATE == "2003-12-31")[,exposure.names] -#' beta.mat.fundm <- cbind(rep(1,447),beta.mat.fundm) # add intercept -#' FM return covariance -#' ret.cov.fundm <- factorModelCovariance(beta.mat.fundm,test.fit$factor.cov$cov, -#' test.fit$resid.variance) -#' # the result is exactly the same -#' test.fit$returns.cov$cov == ret.cov.fundm -#' +#' beta.mat <- subset(stock,DATE == "2003-12-31")[,exposure.names] +#' beta.mat1 <- cbind(rep(1,447),beta.mat) # add intercept +#' # FM return covariance +#' fit.fund <- fitFundamentalFactorModel(exposure.names=c("BOOK2MARKET", "LOG.MARKETCAP") +#' , data=stock,returnsvar = "RETURN",datevar = "DATE", +#' assetvar = "TICKER", +#' wls = TRUE, regression = "classic", +#' covariance = "classic", full.resid.cov = FALSE) +#' ret.cov.fundm <- factorModelCovariance(beta.mat1,fit.fund$factor.cov$cov,fit.fund$resid.variance) +#' fit.fund$returns.cov$cov == ret.cov.fundm +#' } factorModelCovariance <- -function(beta.mat, factor.cov, residVars.vec) { +function(beta, factor.cov, resid.variance) { - beta.mat = as.matrix(beta.mat) + beta = as.matrix(beta) factor.cov = as.matrix(factor.cov) - sig.e = as.vector(residVars.vec) + sig.e = as.vector(resid.variance) if (length(sig.e) > 1) { D.e = diag(as.vector(sig.e)) } else { D.e = as.matrix(sig.e) } - if (ncol(beta.mat) != ncol(factor.cov)) - stop("beta.mat and factor.cov must have same number of columns") + if (ncol(beta) != ncol(factor.cov)) + stop("beta and factor.cov must have same number of columns") - if (nrow(D.e) != nrow(beta.mat)) - stop("beta.mat and D.e must have same number of rows") - cov.fm = beta.mat %*% factor.cov %*% t(beta.mat) + D.e + if (nrow(D.e) != nrow(beta)) + stop("beta and D.e must have same number of rows") + cov.fm = beta %*% factor.cov %*% t(beta) + D.e if (any(diag(chol(cov.fm)) == 0)) warning("Covariance matrix is not positive definite") return(cov.fm) Modified: 
pkg/FactorAnalytics/R/factorModelEsDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-02 19:26:57 UTC (rev 2703) +++ pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-02 21:32:20 UTC (rev 2704) @@ -68,8 +68,8 @@ #' fit.fund$beta["STI",], #' fit.fund$resid.variance["STI"], tail.prob=0.05) #' +#' @export #' -#' factorModelEsDecomposition <- function(Data, beta.vec, sig2.e, tail.prob = 0.05) { ## Compute factor model factor ES decomposition based on Euler's theorem given historic Modified: pkg/FactorAnalytics/R/factorModelSdDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelSdDecomposition.R 2013-08-02 19:26:57 UTC (rev 2703) +++ pkg/FactorAnalytics/R/factorModelSdDecomposition.R 2013-08-02 21:32:20 UTC (rev 2704) @@ -1,89 +1,89 @@ -#' Compute factor model factor risk (sd) decomposition for individual fund. -#' -#' Compute factor model factor risk (sd) decomposition for individual fund. -#' -#' -#' @param beta.vec k x 1 vector of factor betas with factor names in the -#' rownames. -#' @param factor.cov k x k factor excess return covariance matrix. -#' @param sig2.e scalar, residual variance from factor model. -#' @return an S3 object containing -#' @returnItem sd.fm Scalar, std dev based on factor model. -#' @returnItem mcr.fm (K+1) x 1 vector of factor marginal contributions to risk -#' (sd). -#' @returnItem cr.fm (K+1) x 1 vector of factor component contributions to risk -#' (sd). -#' @returnItem pcr.fm (K+1) x 1 vector of factor percent contributions to risk -#' (sd). -#' @author Eric Zivot and Yi-An Chen -#' @examples -#' -#' # load data from the database -#' data(managers.df) -#' ret.assets = managers.df[,(1:6)] -#' factors = managers.df[,(7:9)] -#' # fit the factor model with OLS -#' fit <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", -#' variable.selection="all subsets", -#' factor.set = 3) -#' # factor SD decomposition for HAM1 -#' cov.factors = var(factors) -#' manager.names = colnames(managers.df[,(1:6)]) -#' factor.names = colnames(managers.df[,(7:9)]) -#' factor.sd.decomp.HAM1 = factorModelSdDecomposition(fit$beta.mat["HAM1",], -#' cov.factors, fit$residVars.vec["HAM1"]) -#' -#' -#' -factorModelSdDecomposition <- -function(beta.vec, factor.cov, sig2.e) { -## Inputs: -## beta k x 1 vector of factor betas with factor names in the rownames -## factor.cov k x k factor excess return covariance matrix -## sig2.e scalar, residual variance from factor model (residVars.vec in fitFundamentalFactorModel) -## Output: -## A list with the following components: -## sd.fm scalar, std dev based on factor model -## mcr.fm k+1 x 1 vector of factor marginal contributions to risk (sd) -## cr.fm k+1 x 1 vector of factor component contributions to risk (sd) -## pcr.fm k+1 x 1 vector of factor percent contributions to risk (sd) -## Remarks: -## The factor model has the form -## R(t) = beta'F(t) + e(t) = beta.star'F.star(t) -## where beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))' -## By Euler's theorem -## sd.fm = sum(cr.fm) = sum(beta*mcr.fm) - if(is.matrix(beta.vec)) { - beta.names = c(rownames(beta.vec), "residual") - } else if(is.vector(beta.vec)) { - beta.names = c(names(beta.vec), "residual") - } else { - stop("beta.vec is not a matrix or a vector") - } - beta.vec = as.vector(beta.vec) - beta.star.vec = c(beta.vec, sqrt(sig2.e)) - names(beta.star.vec) = beta.names - factor.cov = 
as.matrix(factor.cov) - k.star = length(beta.star.vec) - k = k.star - 1 - factor.star.cov = diag(k.star) - factor.star.cov[1:k, 1:k] = factor.cov - -## compute factor model sd - sd.fm = as.numeric(sqrt(t(beta.star.vec) %*% factor.star.cov %*% beta.star.vec)) -## compute marginal and component contributions to sd - mcr.fm = (factor.star.cov %*% beta.star.vec)/sd.fm - cr.fm = mcr.fm * beta.star.vec - pcr.fm = cr.fm/sd.fm - rownames(mcr.fm) <- rownames(cr.fm) <- rownames(pcr.fm) <- beta.names - colnames(mcr.fm) = "MCR" - colnames(cr.fm) = "CR" - colnames(pcr.fm) = "PCR" -## return results - ans = list(sd.fm = sd.fm, - mcr.fm = t(mcr.fm), - cr.fm = t(cr.fm), - pcr.fm = t(pcr.fm)) - return(ans) -} - +#' Compute factor model factor risk (sd) decomposition for individual fund. +#' +#' Compute factor model factor risk (sd) decomposition for individual fund. +#' +#' +#' @param beta.vec k x 1 vector of factor betas with factor names in the +#' rownames. +#' @param factor.cov k x k factor excess return covariance matrix. +#' @param sig2.e scalar, residual variance from factor model. +#' @return an S3 object containing +#' @returnItem sd.fm Scalar, std dev based on factor model. +#' @returnItem mcr.fm (K+1) x 1 vector of factor marginal contributions to risk +#' (sd). +#' @returnItem cr.fm (K+1) x 1 vector of factor component contributions to risk +#' (sd). +#' @returnItem pcr.fm (K+1) x 1 vector of factor percent contributions to risk +#' (sd). +#' @author Eric Zivot and Yi-An Chen +#' @examples +#' +#' # load data from the database +#' data(managers.df) +#' ret.assets = managers.df[,(1:6)] +#' factors = managers.df[,(7:9)] +#' # fit the factor model with OLS +#' fit <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", +#' variable.selection="all subsets", +#' factor.set = 3) +#' # factor SD decomposition for HAM1 +#' cov.factors = var(factors) +#' manager.names = colnames(managers.df[,(1:6)]) +#' factor.names = colnames(managers.df[,(7:9)]) +#' factor.sd.decomp.HAM1 = factorModelSdDecomposition(fit$beta.mat["HAM1",], +#' cov.factors, fit$residVars.vec["HAM1"]) +#' +#' @export +#' +factorModelSdDecomposition <- +function(beta.vec, factor.cov, sig2.e) { +## Inputs: +## beta k x 1 vector of factor betas with factor names in the rownames +## factor.cov k x k factor excess return covariance matrix +## sig2.e scalar, residual variance from factor model (residVars.vec in fitFundamentalFactorModel) +## Output: +## A list with the following components: +## sd.fm scalar, std dev based on factor model +## mcr.fm k+1 x 1 vector of factor marginal contributions to risk (sd) +## cr.fm k+1 x 1 vector of factor component contributions to risk (sd) +## pcr.fm k+1 x 1 vector of factor percent contributions to risk (sd) +## Remarks: +## The factor model has the form +## R(t) = beta'F(t) + e(t) = beta.star'F.star(t) +## where beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))' +## By Euler's theorem +## sd.fm = sum(cr.fm) = sum(beta*mcr.fm) + if(is.matrix(beta.vec)) { + beta.names = c(rownames(beta.vec), "residual") + } else if(is.vector(beta.vec)) { + beta.names = c(names(beta.vec), "residual") + } else { + stop("beta.vec is not a matrix or a vector") + } + beta.vec = as.vector(beta.vec) + beta.star.vec = c(beta.vec, sqrt(sig2.e)) + names(beta.star.vec) = beta.names + factor.cov = as.matrix(factor.cov) + k.star = length(beta.star.vec) + k = k.star - 1 + factor.star.cov = diag(k.star) + factor.star.cov[1:k, 1:k] = factor.cov + +## compute factor model sd + sd.fm = as.numeric(sqrt(t(beta.star.vec) 
%*% factor.star.cov %*% beta.star.vec)) +## compute marginal and component contributions to sd + mcr.fm = (factor.star.cov %*% beta.star.vec)/sd.fm + cr.fm = mcr.fm * beta.star.vec + pcr.fm = cr.fm/sd.fm + rownames(mcr.fm) <- rownames(cr.fm) <- rownames(pcr.fm) <- beta.names + colnames(mcr.fm) = "MCR" + colnames(cr.fm) = "CR" + colnames(pcr.fm) = "PCR" +## return results + ans = list(sd.fm = sd.fm, + mcr.fm = t(mcr.fm), + cr.fm = t(cr.fm), + pcr.fm = t(pcr.fm)) + return(ans) +} + Modified: pkg/FactorAnalytics/R/factorModelVaRDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelVaRDecomposition.R 2013-08-02 19:26:57 UTC (rev 2703) +++ pkg/FactorAnalytics/R/factorModelVaRDecomposition.R 2013-08-02 21:32:20 UTC (rev 2704) @@ -1,161 +1,161 @@ -#' Compute factor model factor VaR decomposition -#' -#' Compute factor model factor VaR decomposition based on Euler's theorem given -#' historic or simulated data and factor model parameters. The partial -#' derivative of VaR wrt factor beta is computed as the expected factor return -#' given fund return is equal to its VaR and approximated by kernel estimator. -#' VaR is compute either as the sample quantile or as an estimated quantile -#' using the Cornish-Fisher expansion. -#' -#' The factor model has the form R(t) = beta'F(t) + e(t) = beta.star'F.star(t) -#' where beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))' By Euler's -#' theorem VaR.fm = sum(cVaR.fm) = sum(beta.star*mVaR.fm) -#' -#' @param bootData B x (k+2) matrix of bootstrap data. First column contains -#' the fund returns, second through k+1 columns contain factor returns, (k+2)nd -#' column contain residuals scaled to have unit variance . -#' @param beta.vec k x 1 vector of factor betas. -#' @param sig2.e scalar, residual variance from factor model. -#' @param tail.prob scalar, tail probability -#' @param VaR.method character, method for computing VaR. Valid choices are -#' "HS" for historical simulation (empirical quantile); "CornishFisher" for -#' modified VaR based on Cornish-Fisher quantile estimate. Cornish-Fisher -#' computation is done with the VaR.CornishFisher in the PerformanceAnalytics -#' package. -#' @return an S3 object containing -#' @returnItem VaR.fm Scalar, bootstrap VaR value for fund reported as a -#' positive number. -#' @returnItem n.exceed Scalar, number of observations beyond VaR. -#' @returnItem idx.exceed n.exceed x 1 vector giving index values of -#' exceedences. -#' @returnItem mVaR.fm (K+1) x 1 vector of factor marginal contributions to -#' VaR. -#' @returnItem cVaR.fm (K+1) x 1 vector of factor component contributions to -#' VaR. -#' @returnItem pcVaR.fm (K+1) x 1 vector of factor percent contributions to -#' VaR. -#' @author Eric Zivot and Yi-An Chen -#' @references 1. Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A -#' General Analysis", The Journal of Risk 5/2. 2. Yamai and Yoshiba (2002). -#' "Comparative Analyses of Expected Shortfall and Value-at-Risk: Their -#' Estimation Error, Decomposition, and Optimization Bank of Japan. 3. Meucci -#' (2007). "Risk Contributions from Generic User-Defined Factors," Risk. 4. -#' Epperlein and Smillie (2006) "Cracking VAR with Kernels," Risk. 
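The Euler identity behind factorModelSdDecomposition above, sd.fm = sum(cr.fm) = sum(beta.star * mcr.fm), can be verified in a few lines of base R. The betas and covariances below are toy values, not package output:

# Euler additivity check: total factor-model sd equals the sum of the
# component contributions.
beta.vec   <- c(MKT = 0.9, SMB = 0.3)            # toy factor betas
factor.cov <- matrix(c(0.040, 0.010,
                       0.010, 0.020), 2, 2)      # toy factor covariance
sig2.e     <- 0.005                              # toy residual variance
beta.star       <- c(beta.vec, residual = sqrt(sig2.e))
factor.star.cov <- diag(3)                       # residual scaled to unit variance
factor.star.cov[1:2, 1:2] <- factor.cov
sd.fm  <- drop(sqrt(t(beta.star) %*% factor.star.cov %*% beta.star))
mcr.fm <- drop(factor.star.cov %*% beta.star) / sd.fm
all.equal(sd.fm, sum(beta.star * mcr.fm))        # TRUE by Euler's theorem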
-#' @examples -#' -#' data(managers.df) -#' ret.assets = managers.df[,(1:6)] -#' factors = managers.df[,(7:9)] -#' # fit the factor model with OLS -#' fit <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", -#' variable.selection="all subsets", -#' factor.set = 3) -#' -#' residualData=as.matrix(fit$residVars.vec,1,6) -#' bootData <- factorModelMonteCarlo(n.boot=100, factors ,fit$beta.mat, residual.dist="normal", -#' residualData, Alpha.mat=NULL, boot.method="random", -#' seed = 123, return.factors = "TRUE", return.residuals = "TRUE") -#' -#' # compute risk factor contribution to VaR using bootstrap data -#' # combine fund returns, factor returns and residual returns for HAM1 -#' tmpData = cbind(bootData$returns[,1], bootData$factors, -#' bootData$residuals[,1]/sqrt(fit$residVars.vec[1])) -#' colnames(tmpData)[c(1,5)] = c("HAM1", "residual") -#' factor.VaR.decomp.HAM1 <- factorModelVaRDecomposition(tmpData, fit$beta.mat[1,], -#' fit$residVars.vec[1], tail.prob=0.05,VaR.method="HS") -#' -#' -factorModelVaRDecomposition <- -function(bootData, beta.vec, sig2.e, tail.prob = 0.01, - VaR.method=c("HS", "CornishFisher")) { -## Compute factor model factor VaR decomposition based on Euler's theorem given historic -## or simulated data and factor model parameters. -## The partial derivative of VaR wrt factor beta is computed -## as the expected factor return given fund return is equal to its VaR and approximated by kernel estimator. -## VaR is compute either as the sample quantile or as an estimated quantile -## using the Cornish-Fisher expansion. -## inputs: -## bootData B x (k+2) matrix of bootstrap data. First column contains the fund returns, -## second through k+1 columns contain factor returns, (k+2)nd column contain residuals -## scaled to have variance 1. -## beta.vec k x 1 vector of factor betas -## sig2.e scalar, residual variance from factor model -## tail.prob scalar tail probability -## method character, method for computing marginal ES. Valid choices are -## "average" for approximating E[Fj | R=VaR] -## VaR.method character, method for computing VaR. Valid choices are "HS" for -## historical simulation (empirical quantile); "CornishFisher" for -## modified VaR based on Cornish-Fisher quantile estimate. Cornish-Fisher -## computation is done with the VaR.CornishFisher in the PerformanceAnalytics -## package -## output: -## A list with the following components: -## VaR.fm scalar, bootstrap VaR value for fund reported as a positive number -## n.exceed scalar, number of observations beyond VaR -## idx.exceed n.exceed x 1 vector giving index values of exceedences -## mcES.fm k+1 x 1 vector of factor marginal contributions to ES -## cES.fm k+1 x 1 vector of factor component contributions to ES -## pcES.fm k+1 x 1 vector of factor percent contributions to ES -## Remarks: -## The factor model has the form -## R(t) = beta'F(t) + e(t) = beta.star'F.star(t) -## where beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))' -## By Euler's theorem -## ES.fm = sum(cES.fm) = sum(beta.star*mcES.fm) -## References: -## 1. Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A General Analysis", -## The Journal of Risk 5/2. -## 2. Yamai and Yoshiba (2002). "Comparative Analyses of Expected Shortfall and -## Value-at-Risk: Their Estimation Error, Decomposition, and Optimization -## Bank of Japan. -## 3. Meucci (2007). "Risk Contributions from Generic User-Defined Factors," Risk. 
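Both the removed and the re-added bodies of this function estimate marginal VaR by averaging factor returns over the observations whose fund return lies within a half-width epi of the VaR estimate. A standalone sketch of that window construction, on toy returns rather than bootstrap data:

# Kernel window around VaR: the draws averaged for the marginal estimates.
set.seed(42)
R         <- rnorm(1000, mean = 0, sd = 0.02)   # toy fund returns
tail.prob <- 0.05
VaR.fm    <- quantile(R, prob = tail.prob)      # historical-simulation VaR
epi       <- 2.575 * sd(R) * length(R)^(-1/5)   # Silverman-style bandwidth
idx       <- which(R <= VaR.fm + epi & R >= VaR.fm - epi)
length(idx)                                     # observations inside the window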
- - -require(PerformanceAnalytics) - VaR.method = VaR.method[1] - bootData = as.matrix(bootData) - ncol.bootData = ncol(bootData) - if(is.matrix(beta.vec)) { - beta.names = c(rownames(beta.vec), "residual") - } else if(is.vector(beta.vec)) { - beta.names = c(names(beta.vec), "residual") - } else { - stop("beta.vec is not an n x 1 matrix or a vector") - } - beta.names = c(names(beta.vec), "residual") - beta.star.vec = c(beta.vec, sqrt(sig2.e)) - names(beta.star.vec) = beta.names - - ## epsilon is calculated in the sense of minimizing mean square error by Silverman 1986 - epi <- 2.575*sd(bootData[,1]) * (nrow(bootData)^(-1/5)) - if (VaR.method == "HS") { - VaR.fm = quantile(bootData[, 1], prob=tail.prob) - idx = which(bootData[, 1] <= VaR.fm + epi & bootData[,1] >= VaR.fm - epi) - } else { - VaR.fm = as.numeric(VaR(bootData[, 1], p=(1-tail.prob),method="modified")) - idx = which(bootData[, 1] <= VaR.fm + epi & bootData[,1] >= VaR.fm - epi) - } - ## - ## compute marginal contribution to VaR - ## - ## compute marginal VaR as expected value of factor return given - ## triangler kernel - mVaR.fm = -as.matrix(colMeans(bootData[idx, -1])) - -## compute correction factor so that sum of weighted marginal VaR adds to portfolio VaR -cf = as.numeric( -VaR.fm / sum(mVaR.fm*beta.star.vec) ) -mVaR.fm = cf*mVaR.fm -cVaR.fm = mVaR.fm*beta.star.vec -pcVaR.fm = cVaR.fm/-VaR.fm -colnames(mVaR.fm) = "MVaR" -colnames(cVaR.fm) = "CVaR" -colnames(pcVaR.fm) = "PCVaR" -ans = list(VaR.fm = -VaR.fm, - n.exceed = length(idx), - idx.exceed = idx, - mVaR.fm = t(mVaR.fm), - cVaR.fm = t(cVaR.fm), - pcVaR.fm = t(pcVaR.fm)) -return(ans) -} - +#' Compute factor model factor VaR decomposition +#' +#' Compute factor model factor VaR decomposition based on Euler's theorem given +#' historic or simulated data and factor model parameters. The partial +#' derivative of VaR wrt factor beta is computed as the expected factor return +#' given fund return is equal to its VaR and approximated by kernel estimator. +#' VaR is compute either as the sample quantile or as an estimated quantile +#' using the Cornish-Fisher expansion. +#' +#' The factor model has the form R(t) = beta'F(t) + e(t) = beta.star'F.star(t) +#' where beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))' By Euler's +#' theorem VaR.fm = sum(cVaR.fm) = sum(beta.star*mVaR.fm) +#' +#' @param bootData B x (k+2) matrix of bootstrap data. First column contains +#' the fund returns, second through k+1 columns contain factor returns, (k+2)nd +#' column contain residuals scaled to have unit variance . +#' @param beta.vec k x 1 vector of factor betas. +#' @param sig2.e scalar, residual variance from factor model. +#' @param tail.prob scalar, tail probability +#' @param VaR.method character, method for computing VaR. Valid choices are +#' "HS" for historical simulation (empirical quantile); "CornishFisher" for +#' modified VaR based on Cornish-Fisher quantile estimate. Cornish-Fisher +#' computation is done with the VaR.CornishFisher in the PerformanceAnalytics +#' package. +#' @return an S3 object containing +#' @returnItem VaR.fm Scalar, bootstrap VaR value for fund reported as a +#' positive number. +#' @returnItem n.exceed Scalar, number of observations beyond VaR. +#' @returnItem idx.exceed n.exceed x 1 vector giving index values of +#' exceedences. +#' @returnItem mVaR.fm (K+1) x 1 vector of factor marginal contributions to +#' VaR. +#' @returnItem cVaR.fm (K+1) x 1 vector of factor component contributions to +#' VaR. 
+#' @returnItem pcVaR.fm (K+1) x 1 vector of factor percent contributions to +#' VaR. +#' @author Eric Zivot and Yi-An Chen +#' @references 1. Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A +#' General Analysis", The Journal of Risk 5/2. 2. Yamai and Yoshiba (2002). +#' "Comparative Analyses of Expected Shortfall and Value-at-Risk: Their +#' Estimation Error, Decomposition, and Optimization Bank of Japan. 3. Meucci +#' (2007). "Risk Contributions from Generic User-Defined Factors," Risk. 4. +#' Epperlein and Smillie (2006) "Cracking VAR with Kernels," Risk. +#' @examples +#' +#' data(managers.df) +#' ret.assets = managers.df[,(1:6)] +#' factors = managers.df[,(7:9)] +#' # fit the factor model with OLS +#' fit <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", +#' variable.selection="all subsets", +#' factor.set = 3) +#' +#' residualData=as.matrix(fit$residVars.vec,1,6) +#' bootData <- factorModelMonteCarlo(n.boot=100, factors ,fit$beta.mat, residual.dist="normal", +#' residualData, Alpha.mat=NULL, boot.method="random", +#' seed = 123, return.factors = "TRUE", return.residuals = "TRUE") +#' +#' # compute risk factor contribution to VaR using bootstrap data +#' # combine fund returns, factor returns and residual returns for HAM1 +#' tmpData = cbind(bootData$returns[,1], bootData$factors, +#' bootData$residuals[,1]/sqrt(fit$residVars.vec[1])) +#' colnames(tmpData)[c(1,5)] = c("HAM1", "residual") +#' factor.VaR.decomp.HAM1 <- factorModelVaRDecomposition(tmpData, fit$beta.mat[1,], +#' fit$residVars.vec[1], tail.prob=0.05,VaR.method="HS") +#' +#' @export +factorModelVaRDecomposition <- +function(bootData, beta.vec, sig2.e, tail.prob = 0.01, + VaR.method=c("HS", "CornishFisher")) { +## Compute factor model factor VaR decomposition based on Euler's theorem given historic +## or simulated data and factor model parameters. +## The partial derivative of VaR wrt factor beta is computed +## as the expected factor return given fund return is equal to its VaR and approximated by kernel estimator. +## VaR is compute either as the sample quantile or as an estimated quantile +## using the Cornish-Fisher expansion. +## inputs: +## bootData B x (k+2) matrix of bootstrap data. First column contains the fund returns, +## second through k+1 columns contain factor returns, (k+2)nd column contain residuals +## scaled to have variance 1. +## beta.vec k x 1 vector of factor betas +## sig2.e scalar, residual variance from factor model +## tail.prob scalar tail probability +## method character, method for computing marginal ES. Valid choices are +## "average" for approximating E[Fj | R=VaR] +## VaR.method character, method for computing VaR. Valid choices are "HS" for +## historical simulation (empirical quantile); "CornishFisher" for +## modified VaR based on Cornish-Fisher quantile estimate. 
Cornish-Fisher +## computation is done with the VaR.CornishFisher in the PerformanceAnalytics +## package +## output: +## A list with the following components: +## VaR.fm scalar, bootstrap VaR value for fund reported as a positive number +## n.exceed scalar, number of observations beyond VaR +## idx.exceed n.exceed x 1 vector giving index values of exceedences +## mcES.fm k+1 x 1 vector of factor marginal contributions to ES +## cES.fm k+1 x 1 vector of factor component contributions to ES +## pcES.fm k+1 x 1 vector of factor percent contributions to ES +## Remarks: +## The factor model has the form +## R(t) = beta'F(t) + e(t) = beta.star'F.star(t) +## where beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))' +## By Euler's theorem +## ES.fm = sum(cES.fm) = sum(beta.star*mcES.fm) +## References: +## 1. Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A General Analysis", +## The Journal of Risk 5/2. +## 2. Yamai and Yoshiba (2002). "Comparative Analyses of Expected Shortfall and +## Value-at-Risk: Their Estimation Error, Decomposition, and Optimization +## Bank of Japan. +## 3. Meucci (2007). "Risk Contributions from Generic User-Defined Factors," Risk. + + +require(PerformanceAnalytics) + VaR.method = VaR.method[1] + bootData = as.matrix(bootData) + ncol.bootData = ncol(bootData) + if(is.matrix(beta.vec)) { + beta.names = c(rownames(beta.vec), "residual") + } else if(is.vector(beta.vec)) { + beta.names = c(names(beta.vec), "residual") + } else { + stop("beta.vec is not an n x 1 matrix or a vector") + } + beta.names = c(names(beta.vec), "residual") + beta.star.vec = c(beta.vec, sqrt(sig2.e)) + names(beta.star.vec) = beta.names + + ## epsilon is calculated in the sense of minimizing mean square error by Silverman 1986 + epi <- 2.575*sd(bootData[,1]) * (nrow(bootData)^(-1/5)) + if (VaR.method == "HS") { + VaR.fm = quantile(bootData[, 1], prob=tail.prob) + idx = which(bootData[, 1] <= VaR.fm + epi & bootData[,1] >= VaR.fm - epi) + } else { + VaR.fm = as.numeric(VaR(bootData[, 1], p=(1-tail.prob),method="modified")) + idx = which(bootData[, 1] <= VaR.fm + epi & bootData[,1] >= VaR.fm - epi) + } + ## + ## compute marginal contribution to VaR + ## + ## compute marginal VaR as expected value of factor return given + ## triangler kernel + mVaR.fm = -as.matrix(colMeans(bootData[idx, -1])) + +## compute correction factor so that sum of weighted marginal VaR adds to portfolio VaR +cf = as.numeric( -VaR.fm / sum(mVaR.fm*beta.star.vec) ) +mVaR.fm = cf*mVaR.fm +cVaR.fm = mVaR.fm*beta.star.vec +pcVaR.fm = cVaR.fm/-VaR.fm +colnames(mVaR.fm) = "MVaR" +colnames(cVaR.fm) = "CVaR" +colnames(pcVaR.fm) = "PCVaR" +ans = list(VaR.fm = -VaR.fm, + n.exceed = length(idx), + idx.exceed = idx, + mVaR.fm = t(mVaR.fm), + cVaR.fm = t(cVaR.fm), + pcVaR.fm = t(pcVaR.fm)) +return(ans) +} + Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-02 19:26:57 UTC (rev 2703) +++ pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-02 21:32:20 UTC (rev 2704) @@ -35,8 +35,6 @@ #' the data. #' @param assetvar A character string giving the name of the asset variable in #' the data. -#' @param exposure.names A character string giving the name of the exposure variable in -#' the data. 
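The exposure.names documentation removed here is not lost: the next hunk adds it to the documented return value, so the exposures used in the fit travel with the fitted object. A hypothetical access pattern, with argument names as in the examples earlier in this commit and the stock data set assumed:

# Hypothetical, not run: the fitted object now records its exposures.
# fit.fund <- fitFundamentalFactorModel(exposure.names = c("BOOK2MARKET", "LOG.MARKETCAP"),
#                                       data = stock, returnsvar = "RETURN",
#                                       datevar = "DATE", assetvar = "TICKER")
# fit.fund$exposure.names   # c("BOOK2MARKET", "LOG.MARKETCAP")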
#' @return an S3 object containing #' \itemize{ #' \item returns.cov A "list" object contains covariance information for @@ -58,6 +56,8 @@ #' \item tstats A "xts" object containing the time series of t-statistics #' for each exposure. #' \item call function call +#' \item exposure.names A character string giving the name of the exposure variable in +#' the data. #' } #' @author Guy Yullen and Yi-An Chen #' @examples @@ -109,7 +109,7 @@ #' #' #' } -#' +#' @export fitFundamentalFactorModel <- Modified: pkg/FactorAnalytics/R/fitStatisticalFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitStatisticalFactorModel.R 2013-08-02 19:26:57 UTC (rev 2703) +++ pkg/FactorAnalytics/R/fitStatisticalFactorModel.R 2013-08-02 21:32:20 UTC (rev 2704) @@ -16,6 +16,8 @@ #' considered. #' @param sig significant level when ck method uses. #' @param na.rm if allow missing values. Default is FALSE. +#' +#' #' @return #' \itemize{ #' \item{factors}{T x K the estimated factors.} @@ -75,9 +77,10 @@ #' names(sfm.apca.fit.ck) #' sfm.apca.fit.ck$mimic #' +#' @export +#' fitStatisticalFactorModel <- -function(data, k = 1, refine = TRUE, check = FALSE, max.k = NULL, sig = 0.05, na.rm = FALSE, - ckeckData.method = "xts" ){ +function(data, k = 1, refine = TRUE, check = FALSE, max.k = NULL, sig = 0.05, na.rm = FALSE){ # load package require(MASS) @@ -226,15 +229,15 @@ dimnames(ret.cov) <- list(data.names, data.names) names(alpha) <- data.names - if (ckeckData.method == "xts" | ckeckData.method == "zoo" ) { +# if (ckeckData.method == "xts" | ckeckData.method == "zoo" ) { f <- xts(f,index(data.xts)) resid <- xts(resid,index(data.xts)) - } +# } # create lm list for plot reg.list = list() - if (ckeckData.method == "xts" | ckeckData.method == "zoo" ) { +# if (ckeckData.method == "xts" | ckeckData.method == "zoo" ) { for (i in data.names) { reg.xts = merge(data.xts[,i],f) colnames(reg.xts)[1] <- i @@ -242,15 +245,15 @@ fm.fit = lm(fm.formula, data=reg.xts) reg.list[[i]] = fm.fit } - } else { - for (i in data.names) { - reg.df = as.data.frame(cbind(data[,i],coredata(f))) - colnames(reg.df)[1] <- i - fm.formula = as.formula(paste(i,"~", ".", sep=" ")) - fm.fit = lm(fm.formula, data=reg.df) - reg.list[[i]] = fm.fit - } - } +# } else { +# for (i in data.names) { +# reg.df = as.data.frame(cbind(data[,i],coredata(f))) +# colnames(reg.df)[1] <- i +# fm.formula = as.formula(paste(i,"~", ".", sep=" ")) +# fm.fit = lm(fm.formula, data=reg.df) +# reg.list[[i]] = fm.fit +# } +# } ans <- list(factors = f, loadings = B, k = k, alpha = alpha, ret.cov = ret.cov, r2 = r2, eigen = eigen.tmp$values, residuals=resid, asset.ret = data, @@ -305,14 +308,14 @@ resid <- t(t(data) - alpha) - f %*% B r2 <- (1 - colSums(resid^2)/colSums(xc^2)) [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2704 From noreply at r-forge.r-project.org Fri Aug 2 23:34:21 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 2 Aug 2013 23:34:21 +0200 (CEST) Subject: [Returnanalytics-commits] r2705 - in pkg/FactorAnalytics: R data man Message-ID: <20130802213421.9137918575A@r-forge.r-project.org> Author: chenyian Date: 2013-08-02 23:34:21 +0200 (Fri, 02 Aug 2013) New Revision: 2705 Added: pkg/FactorAnalytics/R/rCornishFisher.R pkg/FactorAnalytics/data/CommonFactors.RData pkg/FactorAnalytics/man/CommonFactors.Rd pkg/FactorAnalytics/man/rCornishFisher.Rd Log: 1. add CommonFactors.RData 2. 
add rCornishFisher.R which generates random variables from Cornish-Fisher distribution. Added: pkg/FactorAnalytics/R/rCornishFisher.R =================================================================== --- pkg/FactorAnalytics/R/rCornishFisher.R (rev 0) +++ pkg/FactorAnalytics/R/rCornishFisher.R 2013-08-02 21:34:21 UTC (rev 2705) @@ -0,0 +1,85 @@ +#' Functions for Cornish-Fisher density, CDF, random number simulation and +#' quantile. +#' +#'@aliases rCornishFisher +#'@aliases dCornishFisher +#'@aliases qCornishFisher +#'@aliases pCornishFisher +#' +#' +#'@description +#' \itemize{ +#' \item \code{rCornishFisher} simulate observations based on +#' Cornish-Fisher quantile expansion given mean, standard +#' deviation, skewness and excess kurtosis. +#' \item \code{dCornishFisher} Computes Cornish-Fisher density +#' from two term Edgeworth expansion given mean, standard +#' deviation, skewness and excess kurtosis. +#' \item \code{pCornishFisher} Computes Cornish-Fisher CDF from +#' two term Edgeworth expansion given mean, standard +#' deviation, skewness and excess kurtosis. +#' \item \code{qCornishFisher} Computes Cornish-Fisher quantiles +#' from two term Edgeworth expansion given mean, standard +#' deviation, skewness and excess kurtosis. +#'} +#' +#'@param n scalar, number of simulated values in rCornishFisher. Sample length in +#' density,distribution,quantile function. +#' @param sigma scalar, standard deviation. +#' @param skew scalar, skewness. +#' @param ekurt scalar, excess kurtosis. +#' @param seed set seed here. Default is \code{NULL}. +#' @param x,q vector of standardized quantiles. See detail. +#' @param p vector of probabilities. +#' +#' @return n simulated values from Cornish-Fisher distribution. +#' @author Eric Zivot and Yi-An Chen. +#' @references +#' \itemize{ +#' \item A.DasGupta, "Asymptotic Theory of Statistics and +#' Probability", Springer Science+Business Media,LLC 2008 +#' \item Thomas A.Severini, "Likelihood Methods in Statistics", +#' Oxford University Press, 2000 +#' } +#' @export +#' +#' @details CDF(q) = Pr(sqrt(n)*(x_bar-mu)/sigma < q) +#' +#' @examples +#' # generate 1000 observation from Cornish-Fisher distribution +#' rc <- rCornishFisher(1000,1,0,5) +#'hist(rc,breaks=100,freq=FALSE,main="simulation of Cornish Fisher Distribution", +#' xlim=c(-10,10)) +#'lines(seq(-10,10,0.1),dnorm(seq(-10,10,0.1),mean=0,sd=1),col=2) +#' # compare with standard normal curve +#' +#' # example from A.dasGupta p.188 exponential example +#' # x is iid exp(1) distribution, sample size = 5 +#' # then x_bar is Gamma(shape=5,scale=1/5) distribution +#' q <- c(0,0.4,1,2) +#' # exact cdf +#' pgamma(q/sqrt(5)+1,shape=5,scale=1/5) +#' # use CLT +#' pnorm(q) +#' # use edgeworth expansion +#' pCornishFisher(q,n=5,skew=2,ekurt=6) +#' + + + +rCornishFisher <- +function(n, sigma, skew, ekurt, seed=NULL) { +## inputs: +## n scalar, number of simulated values +## sigma scalar, standard deviation +## skew scalar, skewness +## ekurt scalar, excess kurtosis +## outputs: +## n simulated values from Cornish-Fisher distribution +if (!is.null(seed)) set.seed(seed) +zc = rnorm(n) +z.cf = zc + (((zc^2 - 1) * skew)/6) + (((zc^3 - 3 * zc) * + ekurt)/24) - ((((2 * zc^3) - 5 * zc) * skew^2)/36) +ans = sigma*z.cf +ans +} Added: pkg/FactorAnalytics/data/CommonFactors.RData =================================================================== (Binary files differ) Property changes on: pkg/FactorAnalytics/data/CommonFactors.RData ___________________________________________________________________ Added: 
svn:mime-type + application/octet-stream Added: pkg/FactorAnalytics/man/CommonFactors.Rd =================================================================== --- pkg/FactorAnalytics/man/CommonFactors.Rd (rev 0) +++ pkg/FactorAnalytics/man/CommonFactors.Rd 2013-08-02 21:34:21 UTC (rev 2705) @@ -0,0 +1,41 @@ +\name{CommonFactors} +\alias{CommonFactors} +\alias{factors} +\alias{factors.Q} +\docType{data} +\title{ +Factor set of several commonly used factors +} +\description{ +10 monthly and quarterly common factors xts data from 1997-01-31 to 2013-07-31. +\itemize{ +\item SP500 is SP500 returns from FRED, +\item GS10TR US Treasury 10y yields total returns from the yeild of the 10 year constant maturity from FRED, +\item USD.Index Trade Weighted U.S. Dollar Index: Major Currencies - TWEXMMTH. from FRED +\item Term.Spread Yield spread of Merrill Lynch High-Yield Corporate Master II Index minus 10-year Treasury. from FRED +\item TED.Spread 3-Month Treasury Bill: Secondary Market Rate(TB3MS) - 3-Month Eurodollar Deposit Rate (London) (MED)3. from FRED. +\item DJUBSTR DJUBS Commodities index. +\item dVIX the first difference of the end-of-month value of the CBOE Volatility Index (VIX). +\item OILPRICE ""OILPRICE" from FRED. +\item TB3MS 3-Month Treasury Bill: Secondary Market Rate(TB3MS) from FRED +} + +} +\usage{data(CommonFactors)} +\format{ + A data frame with 0 observations on the following 2 variables. + \describe{ + \item{\code{x}}{a numeric vector} + \item{\code{y}}{a numeric vector} + } +} +\source{ +\itemize{ +\item FRED +\item http://www.djindexes.com/mdsidx/downloads/xlspages/ubsci_public/DJUBS_full_hist.xls +\item http://www.cboe.com/publish/ScheduledTask/MktData/datahouse/vixarchive.xls +} +} + + + Added: pkg/FactorAnalytics/man/rCornishFisher.Rd =================================================================== --- pkg/FactorAnalytics/man/rCornishFisher.Rd (rev 0) +++ pkg/FactorAnalytics/man/rCornishFisher.Rd 2013-08-02 21:34:21 UTC (rev 2705) @@ -0,0 +1,77 @@ +\name{rCornishFisher} +\alias{dCornishFisher} +\alias{pCornishFisher} +\alias{qCornishFisher} +\alias{rCornishFisher} +\title{Functions for Cornish-Fisher density, CDF, random number simulation and +quantile.} +\usage{ + rCornishFisher(n, sigma, skew, ekurt, seed = NULL) +} +\arguments{ + \item{n}{scalar, number of simulated values in + rCornishFisher. Sample length in + density,distribution,quantile function.} + + \item{sigma}{scalar, standard deviation.} + + \item{skew}{scalar, skewness.} + + \item{ekurt}{scalar, excess kurtosis.} + + \item{seed}{set seed here. Default is \code{NULL}.} + + \item{x,q}{vector of standardized quantiles. See detail.} + + \item{p}{vector of probabilities.} +} +\value{ + n simulated values from Cornish-Fisher distribution. +} +\description{ + \itemize{ \item \code{rCornishFisher} simulate + observations based on Cornish-Fisher quantile expansion + given mean, standard deviation, skewness and excess + kurtosis. \item \code{dCornishFisher} Computes + Cornish-Fisher density from two term Edgeworth expansion + given mean, standard deviation, skewness and excess + kurtosis. \item \code{pCornishFisher} Computes + Cornish-Fisher CDF from two term Edgeworth expansion + given mean, standard deviation, skewness and excess + kurtosis. \item \code{qCornishFisher} Computes + Cornish-Fisher quantiles from two term Edgeworth + expansion given mean, standard deviation, skewness and + excess kurtosis. 
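The expansion these four functions share maps a standard-normal draw z to z + (z^2 - 1)*skew/6 + (z^3 - 3*z)*ekurt/24 - (2*z^3 - 5*z)*skew^2/36, exactly as in the rCornishFisher source added earlier in this commit. A quick standalone sketch of the tail-bending it produces (sigma = 1; all values toy):

# Cornish-Fisher adjustment applied to standard-normal draws.
set.seed(123)
z     <- rnorm(10000)
skew  <- 1
ekurt <- 3
z.cf  <- z + ((z^2 - 1) * skew)/6 + ((z^3 - 3*z) * ekurt)/24 -
         (((2*z^3) - 5*z) * skew^2)/36
quantile(z.cf, c(0.01, 0.99))    # tails stretched relative to qnorm(c(0.01, 0.99))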
} +} +\details{ + CDF(q) = Pr(sqrt(n)*(x_bar-mu)/sigma < q) +} +\examples{ +# generate 1000 observation from Cornish-Fisher distribution +rc <- rCornishFisher(1000,1,0,5) +hist(rc,breaks=100,freq=FALSE,main="simulation of Cornish Fisher Distribution", + xlim=c(-10,10)) +lines(seq(-10,10,0.1),dnorm(seq(-10,10,0.1),mean=0,sd=1),col=2) +# compare with standard normal curve + +# example from A.dasGupta p.188 exponential example +# x is iid exp(1) distribution, sample size = 5 +# then x_bar is Gamma(shape=5,scale=1/5) distribution +q <- c(0,0.4,1,2) +# exact cdf +pgamma(q/sqrt(5)+1,shape=5,scale=1/5) +# use CLT +pnorm(q) +# use edgeworth expansion +pCornishFisher(q,n=5,skew=2,ekurt=6) +} +\author{ + Eric Zivot and Yi-An Chen. +} +\references{ + \itemize{ \item A.DasGupta, "Asymptotic Theory of + Statistics and Probability", Springer Science+Business + Media,LLC 2008 \item Thomas A.Severini, "Likelihood + Methods in Statistics", Oxford University Press, 2000 } +} + From noreply at r-forge.r-project.org Sat Aug 3 00:10:01 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 3 Aug 2013 00:10:01 +0200 (CEST) Subject: [Returnanalytics-commits] r2706 - in pkg/FactorAnalytics: R man Message-ID: <20130802221001.9126E184A10@r-forge.r-project.org> Author: chenyian Date: 2013-08-03 00:10:01 +0200 (Sat, 03 Aug 2013) New Revision: 2706 Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R pkg/FactorAnalytics/R/factorModelVaRDecomposition.R pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd Log: convert VaR.method to PerformanceAnalytics VaR arguments Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-02 21:34:21 UTC (rev 2705) +++ pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-02 22:10:01 UTC (rev 2706) @@ -20,6 +20,10 @@ #' @param sig2.e scalar, residual variance from factor model. #' @param tail.prob scalar, tail probability for VaR quantile. Typically 0.01 #' or 0.05. +#' @param VaR.method character, method for computing VaR. Valid choices are +#' one of "modified","gaussian","historical", "kernel". computation is done with the \code{VaR} +#' in the PerformanceAnalytics package. +#' package. #' @return A list with the following components: #' @returnItem VaR Scalar, nonparametric VaR value for fund reported as a #' positive number. 
@@ -44,18 +48,16 @@ #' @examples #' #' data(managers.df) -#' ret.assets = managers.df[,(1:6)] -#' factors = managers.df[,(7:9)] -#' # fit the factor model with OLS -#' fit <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", -#' variable.selection="all subsets",factor.set=3) +#' fit.macro <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), +#' factors.names=c("EDHEC.LS.EQ","SP500.TR"), +#' data=managers.df,fit.method="OLS") #' # risk factor contribution to ETL #' # combine fund returns, factor returns and residual returns for HAM1 -#' tmpData = cbind(ret.assets[,1], factors, -#' residuals(fit$asset.fit$HAM1)/sqrt(fit$residVars.vec[1])) -#' colnames(tmpData)[c(1,5)] = c("HAM1", "residual") -#' factor.es.decomp.HAM1 = factorModelEsDecomposition(tmpData, fit$beta.mat[1,], -#' fit$residVars.vec[1], tail.prob=0.05) +#' tmpData = cbind(managers.df[,1],managers.df[,c("EDHEC.LS.EQ","SP500.TR")] , +#' residuals(fit.macro$asset.fit$HAM1)/sqrt(fit.macro$resid.variance[1])) +#' colnames(tmpData)[c(1,4)] = c("HAM1", "residual") +#' factor.es.decomp.HAM1 = factorModelEsDecomposition(tmpData, fit.macro$beta[1,], +#' fit.macro$resid.variance[1], tail.prob=0.05) #' #' # fundamental factor model #' # try to find factor contribution to ES for STI @@ -66,47 +68,15 @@ #' colnames(tmpData)[c(1,length(tmpData[1,]))] = c("STI", "residual") #' factorModelEsDecomposition(tmpData, #' fit.fund$beta["STI",], -#' fit.fund$resid.variance["STI"], tail.prob=0.05) +#' fit.fund$resid.variance["STI"], tail.prob=0.05, VaR.method = "historical") #' #' @export #' factorModelEsDecomposition <- -function(Data, beta.vec, sig2.e, tail.prob = 0.05) { -## Compute factor model factor ES decomposition based on Euler's theorem given historic -## or simulated data and factor model parameters. -## The partial derivative of ES wrt factor beta is computed -## as the expected factor return given fund return is less than or equal to its VaR -## VaR is compute either as the sample quantile or as an estimated quantile -## using the Cornish-Fisher expansion -## inputs: -## Data B x (k+2) matrix of data. First column contains the fund returns, -## second through k+1 columns contain factor returns, (k+2)nd column contain residuals -## scaled to have variance 1. -## beta.vec k x 1 vector of factor betas -## sig2.e scalar, residual variance from factor model -## tail.prob scalar tail probability -## output: -## A list with the following components: -## VaR scalar, nonparametric VaR value for fund reported as a positive number -## n.exceed scalar, number of observations beyond VaR -## idx.exceed n.exceed x 1 vector giving index values of exceedences -## ES scalar, nonparametric ES value for fund reported as a positive number -## mcES k+1 x 1 vector of factor marginal contributions to ES -## cES k+1 x 1 vector of factor component contributions to ES -## pcES k+1 x 1 vector of factor percent contributions to ES -## Remarks: -## The factor model has the form -## R(t) = beta'F(t) + e(t) = beta.star'F.star(t) -## where beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))' -## By Euler's theorem -## ES.fm = sum(cES.fm) = sum(beta.star*mcES.fm) -## References: -## 1. Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A General Analysis", -## The Journal of Risk 5/2. -## 2. Yamai and Yoshiba (2002). "Comparative Analyses of Expected Shortfall and -## Value-at-Risk: Their Estimation Error, Decomposition, and Optimization -## Bank of Japan. -## 3. Meucci (2007). "Risk Contributions from Generic User-Defined Factors," Risk. 
+function(Data, beta.vec, sig2.e, tail.prob = 0.05, + VaR.method=c("modified", "gaussian", "historical", "kernel")) { + + require(PerformanceAnalytics) Data = as.matrix(Data) ncol.Data = ncol(Data) if(is.matrix(beta.vec)) { @@ -120,8 +90,13 @@ beta.star.vec = c(beta.vec, sqrt(sig2.e)) names(beta.star.vec) = beta.names - VaR.fm = quantile(Data[, 1], prob=tail.prob) - idx = which(Data[, 1] <= VaR.fm) + ## epsilon is calculated in the sense of minimizing mean square error by Silverman 1986 + epi <- 2.575*sd(Data[,1]) * (nrow(Data)^(-1/5)) + VaR.fm = as.numeric(VaR(Data[, 1], p=(1-tail.prob),method=VaR.method)) + idx = which(Data[, 1] <= VaR.fm + epi & Data[,1] >= VaR.fm - epi) + + + ES.fm = -mean(Data[idx, 1]) ## Modified: pkg/FactorAnalytics/R/factorModelVaRDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelVaRDecomposition.R 2013-08-02 21:34:21 UTC (rev 2705) +++ pkg/FactorAnalytics/R/factorModelVaRDecomposition.R 2013-08-02 22:10:01 UTC (rev 2706) @@ -11,17 +11,15 @@ #' where beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))' By Euler's #' theorem VaR.fm = sum(cVaR.fm) = sum(beta.star*mVaR.fm) #' -#' @param bootData B x (k+2) matrix of bootstrap data. First column contains +#' @param Data B x (k+2) matrix of bootstrap data. First column contains #' the fund returns, second through k+1 columns contain factor returns, (k+2)nd #' column contain residuals scaled to have unit variance . #' @param beta.vec k x 1 vector of factor betas. #' @param sig2.e scalar, residual variance from factor model. #' @param tail.prob scalar, tail probability #' @param VaR.method character, method for computing VaR. Valid choices are -#' "HS" for historical simulation (empirical quantile); "CornishFisher" for -#' modified VaR based on Cornish-Fisher quantile estimate. Cornish-Fisher -#' computation is done with the VaR.CornishFisher in the PerformanceAnalytics -#' package. +#' one of "modified","gaussian","historical", "kernel". computation is done with the \code{VaR} +#' in the PerformanceAnalytics package. #' @return an S3 object containing #' @returnItem VaR.fm Scalar, bootstrap VaR value for fund reported as a #' positive number. 
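After this change the ES step reduces to: estimate VaR with PerformanceAnalytics::VaR using the chosen method, then report ES built from the returns around and below it. A hedged standalone sketch of the underlying tail-average definition on toy returns (PerformanceAnalytics assumed installed; the package function additionally applies the epi window shown above before averaging):

# ES as the negated mean of returns at or below VaR; toy data.
library(PerformanceAnalytics)
set.seed(7)
R         <- rnorm(500, mean = 0.005, sd = 0.03)
tail.prob <- 0.05
VaR.fm <- as.numeric(VaR(R, p = 1 - tail.prob, method = "historical"))
ES.fm  <- -mean(R[R <= VaR.fm])
c(VaR = -VaR.fm, ES = ES.fm)     # both reported as positive numbers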
@@ -44,77 +42,27 @@ #' @examples #' #' data(managers.df) -#' ret.assets = managers.df[,(1:6)] -#' factors = managers.df[,(7:9)] -#' # fit the factor model with OLS -#' fit <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", -#' variable.selection="all subsets", -#' factor.set = 3) -#' -#' residualData=as.matrix(fit$residVars.vec,1,6) -#' bootData <- factorModelMonteCarlo(n.boot=100, factors ,fit$beta.mat, residual.dist="normal", -#' residualData, Alpha.mat=NULL, boot.method="random", -#' seed = 123, return.factors = "TRUE", return.residuals = "TRUE") -#' -#' # compute risk factor contribution to VaR using bootstrap data +#' fit.macro <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), +#' factors.names=c("EDHEC.LS.EQ","SP500.TR"), +#' data=managers.df,fit.method="OLS") +#' # risk factor contribution to ETL #' # combine fund returns, factor returns and residual returns for HAM1 -#' tmpData = cbind(bootData$returns[,1], bootData$factors, -#' bootData$residuals[,1]/sqrt(fit$residVars.vec[1])) -#' colnames(tmpData)[c(1,5)] = c("HAM1", "residual") -#' factor.VaR.decomp.HAM1 <- factorModelVaRDecomposition(tmpData, fit$beta.mat[1,], -#' fit$residVars.vec[1], tail.prob=0.05,VaR.method="HS") +#' tmpData = cbind(managers.df[,1],managers.df[,c("EDHEC.LS.EQ","SP500.TR")] , +#' residuals(fit.macro$asset.fit$HAM1)/sqrt(fit.macro$resid.variance[1])) +#' colnames(tmpData)[c(1,4)] = c("HAM1", "residual") +#' factor.VaR.decomp.HAM1 = factorModelVaRDecomposition(tmpData, fit.macro$beta[1,], +#' fit.macro$resid.variance[1], tail.prob=0.05, +#' VaR.method="historical") #' #' @export factorModelVaRDecomposition <- -function(bootData, beta.vec, sig2.e, tail.prob = 0.01, - VaR.method=c("HS", "CornishFisher")) { -## Compute factor model factor VaR decomposition based on Euler's theorem given historic -## or simulated data and factor model parameters. -## The partial derivative of VaR wrt factor beta is computed -## as the expected factor return given fund return is equal to its VaR and approximated by kernel estimator. -## VaR is compute either as the sample quantile or as an estimated quantile -## using the Cornish-Fisher expansion. -## inputs: -## bootData B x (k+2) matrix of bootstrap data. First column contains the fund returns, -## second through k+1 columns contain factor returns, (k+2)nd column contain residuals -## scaled to have variance 1. -## beta.vec k x 1 vector of factor betas -## sig2.e scalar, residual variance from factor model -## tail.prob scalar tail probability -## method character, method for computing marginal ES. Valid choices are -## "average" for approximating E[Fj | R=VaR] -## VaR.method character, method for computing VaR. Valid choices are "HS" for -## historical simulation (empirical quantile); "CornishFisher" for -## modified VaR based on Cornish-Fisher quantile estimate. 
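The correction factor cf computed in the body below rescales the raw kernel estimates so that Euler additivity holds exactly, meaning the component VaRs sum to the total VaR. A toy check with made-up numbers:

# Euler additivity enforced by the correction factor cf.
mVaR.raw  <- c(MKT = 0.8, SMB = 0.3, residual = 0.1)    # raw kernel estimates
beta.star <- c(MKT = 0.9, SMB = 0.3, residual = sqrt(0.005))
VaR.fm    <- -0.05                                      # VaR as a return quantile
cf        <- as.numeric(-VaR.fm / sum(mVaR.raw * beta.star))
cVaR.fm   <- cf * mVaR.raw * beta.star
all.equal(sum(cVaR.fm), -VaR.fm)                        # TRUE: contributions add up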
Cornish-Fisher -## computation is done with the VaR.CornishFisher in the PerformanceAnalytics -## package -## output: -## A list with the following components: -## VaR.fm scalar, bootstrap VaR value for fund reported as a positive number -## n.exceed scalar, number of observations beyond VaR -## idx.exceed n.exceed x 1 vector giving index values of exceedences -## mcES.fm k+1 x 1 vector of factor marginal contributions to ES -## cES.fm k+1 x 1 vector of factor component contributions to ES -## pcES.fm k+1 x 1 vector of factor percent contributions to ES -## Remarks: -## The factor model has the form -## R(t) = beta'F(t) + e(t) = beta.star'F.star(t) -## where beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))' -## By Euler's theorem -## ES.fm = sum(cES.fm) = sum(beta.star*mcES.fm) -## References: -## 1. Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A General Analysis", -## The Journal of Risk 5/2. -## 2. Yamai and Yoshiba (2002). "Comparative Analyses of Expected Shortfall and -## Value-at-Risk: Their Estimation Error, Decomposition, and Optimization -## Bank of Japan. -## 3. Meucci (2007). "Risk Contributions from Generic User-Defined Factors," Risk. +function(Data, beta.vec, sig2.e, tail.prob = 0.01, + VaR.method=c("modified", "gaussian", "historical", "kernel")) { - require(PerformanceAnalytics) VaR.method = VaR.method[1] - bootData = as.matrix(bootData) - ncol.bootData = ncol(bootData) + Data = as.matrix(Data) + ncol.Data = ncol(Data) if(is.matrix(beta.vec)) { beta.names = c(rownames(beta.vec), "residual") } else if(is.vector(beta.vec)) { @@ -127,20 +75,16 @@ names(beta.star.vec) = beta.names ## epsilon is calculated in the sense of minimizing mean square error by Silverman 1986 - epi <- 2.575*sd(bootData[,1]) * (nrow(bootData)^(-1/5)) - if (VaR.method == "HS") { - VaR.fm = quantile(bootData[, 1], prob=tail.prob) - idx = which(bootData[, 1] <= VaR.fm + epi & bootData[,1] >= VaR.fm - epi) - } else { - VaR.fm = as.numeric(VaR(bootData[, 1], p=(1-tail.prob),method="modified")) - idx = which(bootData[, 1] <= VaR.fm + epi & bootData[,1] >= VaR.fm - epi) - } + epi <- 2.575*sd(Data[,1]) * (nrow(Data)^(-1/5)) + VaR.fm = as.numeric(VaR(Data[, 1], p=(1-tail.prob),method=VaR.method)) + idx = which(Data[, 1] <= VaR.fm + epi & Data[,1] >= VaR.fm - epi) + ## ## compute marginal contribution to VaR ## ## compute marginal VaR as expected value of factor return given ## triangler kernel - mVaR.fm = -as.matrix(colMeans(bootData[idx, -1])) + mVaR.fm = -as.matrix(colMeans(Data[idx, -1])) ## compute correction factor so that sum of weighted marginal VaR adds to portfolio VaR cf = as.numeric( -VaR.fm / sum(mVaR.fm*beta.star.vec) ) Modified: pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-02 21:34:21 UTC (rev 2705) +++ pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-02 22:10:01 UTC (rev 2706) @@ -3,7 +3,8 @@ \title{Compute Factor Model Factor ES Decomposition} \usage{ factorModelEsDecomposition(Data, beta.vec, sig2.e, - tail.prob = 0.05) + tail.prob = 0.05, + VaR.method = c("HS", "CornishFisher")) } \arguments{ \item{Data}{\code{B x (k+2)} matrix of historic or @@ -20,6 +21,13 @@ \item{tail.prob}{scalar, tail probability for VaR quantile. Typically 0.01 or 0.05.} + + \item{VaR.method}{character, method for computing VaR. 
+ Valid choices are "HS" for historical simulation + (empirical quantile); "CornishFisher" for modified VaR + based on Cornish-Fisher quantile estimate. Cornish-Fisher + computation is done with the VaR.CornishFisher in the + PerformanceAnalytics package.} } \value{ A list with the following components: @@ -43,18 +51,16 @@ } \examples{ data(managers.df) -ret.assets = managers.df[,(1:6)] -factors = managers.df[,(7:9)] -# fit the factor model with OLS -fit <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", - variable.selection="all subsets",factor.set=3) +fit.macro <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), + factors.names=c("EDHEC.LS.EQ","SP500.TR"), + data=managers.df,fit.method="OLS") # risk factor contribution to ETL # combine fund returns, factor returns and residual returns for HAM1 -tmpData = cbind(ret.assets[,1], factors, - residuals(fit$asset.fit$HAM1)/sqrt(fit$residVars.vec[1])) -colnames(tmpData)[c(1,5)] = c("HAM1", "residual") -factor.es.decomp.HAM1 = factorModelEsDecomposition(tmpData, fit$beta.mat[1,], - fit$residVars.vec[1], tail.prob=0.05) +tmpData = cbind(managers.df[,1],managers.df[,c("EDHEC.LS.EQ","SP500.TR")] , +residuals(fit.macro$asset.fit$HAM1)/sqrt(fit.macro$resid.variance[1])) +colnames(tmpData)[c(1,4)] = c("HAM1", "residual") +factor.es.decomp.HAM1 = factorModelEsDecomposition(tmpData, fit.macro$beta[1,], + fit.macro$resid.variance[1], tail.prob=0.05) # fundamental factor model # try to find factor contribution to ES for STI @@ -65,7 +71,7 @@ colnames(tmpData)[c(1,length(tmpData[1,]))] = c("STI", "residual") factorModelEsDecomposition(tmpData, fit.fund$beta["STI",], - fit.fund$resid.variance["STI"], tail.prob=0.05) + fit.fund$resid.variance["STI"], tail.prob=0.05, VaR.method = "historical") } \author{ Eric Zivot and Yi-An Chen. Modified: pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd 2013-08-02 21:34:21 UTC (rev 2705) +++ pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd 2013-08-02 22:10:01 UTC (rev 2706) @@ -1,83 +1,74 @@ -\name{factorModelVaRDecomposition} -\alias{factorModelVaRDecomposition} -\title{Compute factor model factor VaR decomposition} -\usage{ - factorModelVaRDecomposition(bootData, beta.vec, sig2.e, - tail.prob = 0.01, - VaR.method = c("HS", "CornishFisher")) -} -\arguments{ - \item{bootData}{B x (k+2) matrix of bootstrap data. First - column contains the fund returns, second through k+1 - columns contain factor returns, (k+2)nd column contain - residuals scaled to have unit variance .} - - \item{beta.vec}{k x 1 vector of factor betas.} - - \item{sig2.e}{scalar, residual variance from factor - model.} - - \item{tail.prob}{scalar, tail probability} - - \item{VaR.method}{character, method for computing VaR. - Valid choices are "HS" for historical simulation - (empirical quantile); "CornishFisher" for modified VaR - based on Cornish-Fisher quantile estimate. Cornish-Fisher - computation is done with the VaR.CornishFisher in the - PerformanceAnalytics package.} -} -\value{ - an S3 object containing -} -\description{ - Compute factor model factor VaR decomposition based on - Euler's theorem given historic or simulated data and - factor model parameters. The partial derivative of VaR - wrt factor beta is computed as the expected factor return - given fund return is equal to its VaR and approximated by - kernel estimator. 
VaR is computed either as the sample - quantile or as an estimated quantile using the - Cornish-Fisher expansion. -} -\details{ - The factor model has the form R(t) = beta'F(t) + e(t) = - beta.star'F.star(t) where beta.star = (beta, sig.e)' and - F.star(t) = (F(t)', z(t))' By Euler's theorem VaR.fm = - sum(cVaR.fm) = sum(beta.star*mVaR.fm) -} -\examples{ -data(managers.df) -ret.assets = managers.df[,(1:6)] -factors = managers.df[,(7:9)] -# fit the factor model with OLS -fit <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", - variable.selection="all subsets", - factor.set = 3) - -residualData=as.matrix(fit$residVars.vec,1,6) -bootData <- factorModelMonteCarlo(n.boot=100, factors ,fit$beta.mat, residual.dist="normal", - residualData, Alpha.mat=NULL, boot.method="random", - seed = 123, return.factors = "TRUE", return.residuals = "TRUE") - -# compute risk factor contribution to VaR using bootstrap data -# combine fund returns, factor returns and residual returns for HAM1 -tmpData = cbind(bootData$returns[,1], bootData$factors, - bootData$residuals[,1]/sqrt(fit$residVars.vec[1])) -colnames(tmpData)[c(1,5)] = c("HAM1", "residual") -factor.VaR.decomp.HAM1 <- factorModelVaRDecomposition(tmpData, fit$beta.mat[1,], - fit$residVars.vec[1], tail.prob=0.05,VaR.method="HS") -} -\author{ - Eric Zivot and Yi-An Chen -} -\references{ - 1. Hallerback (2003), "Decomposing Portfolio - Value-at-Risk: A General Analysis", The Journal of Risk - 5/2. 2. Yamai and Yoshiba (2002). "Comparative Analyses - of Expected Shortfall and Value-at-Risk: Their Estimation - Error, Decomposition, and Optimization Bank of Japan. 3. - Meucci (2007). "Risk Contributions from Generic - User-Defined Factors," Risk. 4. Epperlein and Smillie - (2006) "Cracking VAR with Kernels," Risk. -} - +\name{factorModelVaRDecomposition} +\alias{factorModelVaRDecomposition} +\title{Compute factor model factor VaR decomposition} +\usage{ + factorModelVaRDecomposition(bootData, beta.vec, sig2.e, + tail.prob = 0.01, + VaR.method = c("HS", "CornishFisher")) +} +\arguments{ + \item{bootData}{B x (k+2) matrix of bootstrap data. First + column contains the fund returns, second through k+1 + columns contain factor returns, (k+2)nd column contains + residuals scaled to have unit variance.} + + \item{beta.vec}{k x 1 vector of factor betas.} + + \item{sig2.e}{scalar, residual variance from factor + model.} + + \item{tail.prob}{scalar, tail probability} + + \item{VaR.method}{character, method for computing VaR. + Valid choices are "HS" for historical simulation + (empirical quantile); "CornishFisher" for modified VaR + based on Cornish-Fisher quantile estimate. Cornish-Fisher + computation is done with the VaR.CornishFisher in the + PerformanceAnalytics package.} +} +\value{ + an S3 object containing +} +\description{ + Compute factor model factor VaR decomposition based on + Euler's theorem given historic or simulated data and + factor model parameters. The partial derivative of VaR + wrt factor beta is computed as the expected factor return + given fund return is equal to its VaR and approximated by + kernel estimator. VaR is computed either as the sample + quantile or as an estimated quantile using the + Cornish-Fisher expansion.
+} +\details{ + The factor model has the form R(t) = beta'F(t) + e(t) = + beta.star'F.star(t) where beta.star = (beta, sig.e)' and + F.star(t) = (F(t)', z(t))' By Euler's theorem VaR.fm = + sum(cVaR.fm) = sum(beta.star*mVaR.fm) +} +\examples{ +data(managers.df) +fit.macro <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), + factors.names=c("EDHEC.LS.EQ","SP500.TR"), + data=managers.df,fit.method="OLS") +# risk factor contribution to ETL +# combine fund returns, factor returns and residual returns for HAM1 +tmpData = cbind(managers.df[,1],managers.df[,c("EDHEC.LS.EQ","SP500.TR")] , +residuals(fit.macro$asset.fit$HAM1)/sqrt(fit.macro$resid.variance[1])) +colnames(tmpData)[c(1,4)] = c("HAM1", "residual") +factor.VaR.decomp.HAM1 = factorModelEsDecomposition(tmpData, fit.macro$beta[1,], + fit.macro$resid.variance[1], tail.prob=0.05) +} +\author{ + Eric Zivot and Yi-An Chen +} +\references{ + 1. Hallerback (2003), "Decomposing Portfolio + Value-at-Risk: A General Analysis", The Journal of Risk + 5/2. 2. Yamai and Yoshiba (2002). "Comparative Analyses + of Expected Shortfall and Value-at-Risk: Their Estimation + Error, Decomposition, and Optimization," Bank of Japan. 3. + Meucci (2007). "Risk Contributions from Generic + User-Defined Factors," Risk. 4. Epperlein and Smillie + (2006) "Cracking VAR with Kernels," Risk. +} + From noreply at r-forge.r-project.org Sat Aug 3 00:27:25 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 3 Aug 2013 00:27:25 +0200 (CEST) Subject: [Returnanalytics-commits] r2707 - pkg/FactorAnalytics/R Message-ID: <20130802222725.A4C1B184A10@r-forge.r-project.org> Author: chenyian Date: 2013-08-03 00:27:25 +0200 (Sat, 03 Aug 2013) New Revision: 2707 Modified: pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r pkg/FactorAnalytics/R/plot.StatFactorModel.r pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r Log: modify plot method related to VaR and ES decomposition Modified: pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r 2013-08-02 22:10:01 UTC (rev 2706) +++ pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r 2013-08-02 22:27:25 UTC (rev 2707) @@ -34,6 +34,9 @@ #' 8 = histogram of residuals with normal curve overlayed, #' 9 = normal qq-plot of residuals. #' @param legend.txt Logical. TRUE will plot legend on barplot. Default is \code{TRUE}. +#' @param VaR.method character, method for computing VaR. Valid choices are +#' one of "modified","gaussian","historical", "kernel". computation is done with the \code{VaR} +#' in the PerformanceAnalytics package. Default is "historical". #' @param ... other variables for barplot method. #' @author Eric Zivot and Yi-An Chen. #' @examples @@ -62,7 +65,7 @@ function(x,which.plot=c("none","1L","2L","3L","4L","5L","6L"),max.show=4, plot.single=FALSE, asset.name, which.plot.single=c("none","1L","2L","3L","4L","5L","6L", "7L","8L","9L"),legend.txt=TRUE,VaR.method="historical",...)
{ require(ellipse) require(PerformanceAnalytics) @@ -229,7 +232,7 @@ factor.es.decomp.list[[i]] = factorModelEsDecomposition(tmpData, x$beta[i,], - x$resid.variance[i], tail.prob=0.05) + x$resid.variance[i], tail.prob=0.05,VaR.method=VaR.method) } # stacked bar charts of percent contributions to ES @@ -256,7 +259,7 @@ factor.VaR.decomp.list[[i]] = factorModelVaRDecomposition(tmpData, x$beta[i,], - x$resid.variance[i], tail.prob=0.05) + x$resid.variance[i], tail.prob=0.05,VaR.method=VaR.method) } Modified: pkg/FactorAnalytics/R/plot.StatFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/plot.StatFactorModel.r 2013-08-02 22:10:01 UTC (rev 2706) +++ pkg/FactorAnalytics/R/plot.StatFactorModel.r 2013-08-02 22:27:25 UTC (rev 2707) @@ -33,6 +33,9 @@ #' plot of recursive estimates relative to full sample estimates 13= rolling #' estimates over 24 month window #' @param max.show Maximum assets to plot. Default is 6. +#' @param VaR.method character, method for computing VaR. Valid choices are +#' one of "modified","gaussian","historical", "kernel". computation is done with the \code{VaR} +#' in the PerformanceAnalytics package. Default is "historical". #' @param ... other variables for barplot method. #' @author Eric Zivot and Yi-An Chen. #' @examples @@ -58,7 +61,7 @@ hgrid = FALSE, vgrid = FALSE,plot.single=FALSE, asset.name, which.plot.single=c("none","1L","2L","3L","4L","5L","6L", "7L","8L","9L","10L","11L","12L","13L"), - max.show=6, ...) + max.show=6, VaR.method = "historical",...) { require(strucchange) require(ellipse) @@ -412,7 +415,7 @@ factor.es.decomp.list[[i]] = factorModelEsDecomposition(tmpData, x$loadings[,i], - x$resid.variance[i], tail.prob=0.05) + x$resid.variance[i], tail.prob=0.05,VaR.method=VaR.method) } @@ -438,7 +441,7 @@ factor.VaR.decomp.list[[i]] = factorModelVaRDecomposition(tmpData, x$loadings[,i], - x$resid.variance[i], tail.prob=0.05) + x$resid.variance[i], tail.prob=0.05,VaR.method=VaR.method) } Modified: pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r 2013-08-02 22:10:01 UTC (rev 2706) +++ pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r 2013-08-02 22:27:25 UTC (rev 2707) @@ -25,6 +25,9 @@ #' CUSUM plot of recursive residuals 11= CUSUM plot of OLS residuals 12= CUSUM #' plot of recursive estimates relative to full sample estimates 13= rolling #' estimates over 24 month window +#' @param VaR.method character, method for computing VaR. Valid choices are +#' one of "modified","gaussian","historical", "kernel". computation is done with the \code{VaR} +#' in the PerformanceAnalytics package. Default is "historical". #' @author Eric Zivot and Yi-An Chen.
#' @examples #' @@ -45,7 +48,8 @@ function(x,colorset=c(1:12),legend.loc=NULL, which.plot=c("none","1L","2L","3L","4L","5L","6L","7L"),max.show=6, plot.single=FALSE, asset.name,which.plot.single=c("none","1L","2L","3L","4L","5L","6L", - "7L","8L","9L","10L","11L","12L","13L")) { + "7L","8L","9L","10L","11L","12L","13L"), + VaR.method = "historical") { require(zoo) require(PerformanceAnalytics) require(strucchange) @@ -436,7 +440,7 @@ factor.VaR.decomp.list[[i]] = factorModelVaRDecomposition(tmpData, x$beta[i,], - x$resid.variance[i], tail.prob=0.05) + x$resid.variance[i], tail.prob=0.05,VaR.method=VaR.method) } } else { @@ -450,7 +454,7 @@ factorModelVaRDecomposition(tmpData, x$beta[i,], x$resid.variance[i], tail.prob=0.05, - VaR.method="HS") + VaR.method=VaR.method) } } From noreply at r-forge.r-project.org Sat Aug 3 00:32:44 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 3 Aug 2013 00:32:44 +0200 (CEST) Subject: [Returnanalytics-commits] r2708 - in pkg/FactorAnalytics: R man Message-ID: <20130802223244.81985184A10@r-forge.r-project.org> Author: chenyian Date: 2013-08-03 00:32:44 +0200 (Sat, 03 Aug 2013) New Revision: 2708 Modified: pkg/FactorAnalytics/R/factorModelSdDecomposition.R pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd pkg/FactorAnalytics/man/factorModelSdDecomposition.Rd pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/plot.StatFactorModel.Rd pkg/FactorAnalytics/man/plot.TimeSeriesFactorModel.Rd Log: modify .Rd file for the change of VaR.method Modified: pkg/FactorAnalytics/R/factorModelSdDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelSdDecomposition.R 2013-08-02 22:27:25 UTC (rev 2707) +++ pkg/FactorAnalytics/R/factorModelSdDecomposition.R 2013-08-02 22:32:44 UTC (rev 2708) @@ -19,34 +19,22 @@ #' @examples #' #' # load data from the database -#' data(managers.df) -#' ret.assets = managers.df[,(1:6)] -#' factors = managers.df[,(7:9)] -#' # fit the factor model with OLS -#' fit <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", -#' variable.selection="all subsets", -#' factor.set = 3) -#' # factor SD decomposition for HAM1 -#' cov.factors = var(factors) -#' manager.names = colnames(managers.df[,(1:6)]) -#' factor.names = colnames(managers.df[,(7:9)]) -#' factor.sd.decomp.HAM1 = factorModelSdDecomposition(fit$beta.mat["HAM1",], -#' cov.factors, fit$residVars.vec["HAM1"]) +#' data("stat.fm.data") +#' fit.stat <- fitStatisticalFactorModel(sfm.dat,k=2) +#' cov.factors = var(fit.stat$factors) +#' names = colnames(fit.stat$asset.ret) +#' factor.sd.decomp.list = list() +#' for (i in names) { +#' factor.sd.decomp.list[[i]] = +#' factorModelSdDecomposition(fit.stat$loadings[,i], +#' cov.factors, fit.stat$resid.variance[i]) +#' } #' #' @export #' factorModelSdDecomposition <- function(beta.vec, factor.cov, sig2.e) { -## Inputs: -## beta k x 1 vector of factor betas with factor names in the rownames -## factor.cov k x k factor excess return covariance matrix -## sig2.e scalar, residual variance from factor model (residVars.vec in fitFundamentalFactorModel) -## Output: -## A list with the following components: -## sd.fm scalar, std dev based on factor model -## mcr.fm k+1 x 1 vector of factor marginal contributions to risk (sd) -## cr.fm k+1 x 1 vector of factor component contributions to risk (sd) -## pcr.fm k+1 x 1 vector of factor percent contributions to risk (sd) 
+ ## Remarks: ## The factor model has the form ## R(t) = beta'F(t) + e(t) = beta.star'F.star(t) Modified: pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-02 22:27:25 UTC (rev 2707) +++ pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-02 22:32:44 UTC (rev 2708) @@ -4,7 +4,7 @@ \usage{ factorModelEsDecomposition(Data, beta.vec, sig2.e, tail.prob = 0.05, - VaR.method = c("HS", "CornishFisher")) + VaR.method = c("modified", "gaussian", "historical", "kernel")) } \arguments{ \item{Data}{\code{B x (k+2)} matrix of historic or @@ -23,11 +23,10 @@ quantile. Typically 0.01 or 0.05.} \item{VaR.method}{character, method for computing VaR. - Valid choices are "HS" for historical simulation - (empirical quantile); "CornishFisher" for modified VaR - based on Cornish-Fisher quantile estimate. Cornish-Fisher - computation is done with the VaR.CornishFisher in the - PerformanceAnalytics package.} + Valid choices are one of + "modified","gaussian","historical", "kernel". computation + is done with the \code{VaR} in the PerformanceAnalytics + package. package.} } \value{ A list with the following components: Modified: pkg/FactorAnalytics/man/factorModelSdDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelSdDecomposition.Rd 2013-08-02 22:27:25 UTC (rev 2707) +++ pkg/FactorAnalytics/man/factorModelSdDecomposition.Rd 2013-08-02 22:32:44 UTC (rev 2708) @@ -1,43 +1,40 @@ -\name{factorModelSdDecomposition} -\alias{factorModelSdDecomposition} -\title{Compute factor model factor risk (sd) decomposition for individual fund.} -\usage{ - factorModelSdDecomposition(beta.vec, factor.cov, sig2.e) -} -\arguments{ - \item{beta.vec}{k x 1 vector of factor betas with factor - names in the rownames.} - - \item{factor.cov}{k x k factor excess return covariance - matrix.} - - \item{sig2.e}{scalar, residual variance from factor - model.} -} -\value{ - an S3 object containing -} -\description{ - Compute factor model factor risk (sd) decomposition for - individual fund. -} -\examples{ -# load data from the database -data(managers.df) -ret.assets = managers.df[,(1:6)] -factors = managers.df[,(7:9)] -# fit the factor model with OLS -fit <- fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", - variable.selection="all subsets", - factor.set = 3) -# factor SD decomposition for HAM1 -cov.factors = var(factors) -manager.names = colnames(managers.df[,(1:6)]) -factor.names = colnames(managers.df[,(7:9)]) -factor.sd.decomp.HAM1 = factorModelSdDecomposition(fit$beta.mat["HAM1",], - cov.factors, fit$residVars.vec["HAM1"]) -} -\author{ - Eric Zivot and Yi-An Chen -} - +\name{factorModelSdDecomposition} +\alias{factorModelSdDecomposition} +\title{Compute factor model factor risk (sd) decomposition for individual fund.} +\usage{ + factorModelSdDecomposition(beta.vec, factor.cov, sig2.e) +} +\arguments{ + \item{beta.vec}{k x 1 vector of factor betas with factor + names in the rownames.} + + \item{factor.cov}{k x k factor excess return covariance + matrix.} + + \item{sig2.e}{scalar, residual variance from factor + model.} +} +\value{ + an S3 object containing +} +\description{ + Compute factor model factor risk (sd) decomposition for + individual fund. 
+} +\examples{ +# load data from the database +data("stat.fm.data") +fit.stat <- fitStatisticalFactorModel(sfm.dat,k=2) +cov.factors = var(fit.stat$factors) +names = colnames(fit.stat$asset.ret) +factor.sd.decomp.list = list() +for (i in names) { + factor.sd.decomp.list[[i]] = + factorModelSdDecomposition(fit.stat$loadings[,i], + cov.factors, fit.stat$resid.variance[i]) +} +} +\author{ + Eric Zivot and Yi-An Chen +} + Modified: pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd 2013-08-02 22:27:25 UTC (rev 2707) +++ pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd 2013-08-02 22:32:44 UTC (rev 2708) @@ -2,12 +2,12 @@ \alias{factorModelVaRDecomposition} \title{Compute factor model factor VaR decomposition} \usage{ - factorModelVaRDecomposition(bootData, beta.vec, sig2.e, + factorModelVaRDecomposition(Data, beta.vec, sig2.e, tail.prob = 0.01, - VaR.method = c("HS", "CornishFisher")) + VaR.method = c("modified", "gaussian", "historical", "kernel")) } \arguments{ - \item{bootData}{B x (k+2) matrix of bootstrap data. First + \item{Data}{B x (k+2) matrix of bootstrap data. First column contains the fund returns, second through k+1 columns contain factor returns, (k+2)nd column contains residuals scaled to have unit variance.} @@ -20,11 +20,10 @@ \item{tail.prob}{scalar, tail probability} \item{VaR.method}{character, method for computing VaR. - Valid choices are "HS" for historical simulation - (empirical quantile); "CornishFisher" for modified VaR - based on Cornish-Fisher quantile estimate. Cornish-Fisher - computation is done with the VaR.CornishFisher in the - PerformanceAnalytics package.} + Valid choices are one of + "modified","gaussian","historical", "kernel". computation + is done with the \code{VaR} in the PerformanceAnalytics + package.} } \value{ an S3 object containing @@ -55,8 +54,9 @@ tmpData = cbind(managers.df[,1],managers.df[,c("EDHEC.LS.EQ","SP500.TR")] , residuals(fit.macro$asset.fit$HAM1)/sqrt(fit.macro$resid.variance[1])) colnames(tmpData)[c(1,4)] = c("HAM1", "residual") -factor.VaR.decomp.HAM1 = factorModelEsDecomposition(tmpData, fit.macro$beta[1,], - fit.macro$resid.variance[1], tail.prob=0.05) +factor.VaR.decomp.HAM1 = factorModelVaRDecomposition(tmpData, fit.macro$beta[1,], + fit.macro$resid.variance[1], tail.prob=0.05, + VaR.method="historical) } \author{ Eric Zivot and Yi-An Chen Modified: pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd 2013-08-02 22:27:25 UTC (rev 2707) +++ pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd 2013-08-02 22:32:44 UTC (rev 2708) @@ -6,7 +6,7 @@ which.plot = c("none", "1L", "2L", "3L", "4L", "5L", "6L"), max.show = 4, plot.single = FALSE, asset.name, which.plot.single = c("none", "1L", "2L", "3L", "4L", "5L", "6L", "7L", "8L", "9L"), - legend.txt = TRUE, ...) + legend.txt = TRUE, VaR.method = "historical", ...) } \arguments{ \item{x}{fit object created by @@ -40,6 +40,12 @@ \item{legend.txt}{Logical. TRUE will plot legend on barplot. Default is \code{TRUE}.} + + \item{VaR.method}{character, method for computing VaR. + Valid choices are one of + "modified","gaussian","historical", "kernel". computation + is done with the \code{VaR} in the PerformanceAnalytics + package.
Default is "historical".} + + \item{...}{other variables for barplot method.} } \description{ Modified: pkg/FactorAnalytics/man/plot.StatFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/plot.StatFactorModel.Rd 2013-08-02 22:27:25 UTC (rev 2707) +++ pkg/FactorAnalytics/man/plot.StatFactorModel.Rd 2013-08-02 22:32:44 UTC (rev 2708) @@ -8,7 +8,7 @@ hgrid = FALSE, vgrid = FALSE, plot.single = FALSE, asset.name, which.plot.single = c("none", "1L", "2L", "3L", "4L", "5L", "6L", "7L", "8L", "9L", "10L", "11L", "12L", "13L"), - max.show = 6, ...) + max.show = 6, VaR.method = "historical", ...) } \arguments{ \item{x}{fit object created by @@ -60,6 +60,12 @@ \item{max.show}{Maximum assets to plot. Default is 6.} + + \item{VaR.method}{character, method for computing VaR. + Valid choices are one of + "modified","gaussian","historical", "kernel". computation + is done with the \code{VaR} in the PerformanceAnalytics + package. Default is "historical".} + \item{...}{other variables for barplot method.} } \description{ Modified: pkg/FactorAnalytics/man/plot.TimeSeriesFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/plot.TimeSeriesFactorModel.Rd 2013-08-02 22:27:25 UTC (rev 2707) +++ pkg/FactorAnalytics/man/plot.TimeSeriesFactorModel.Rd 2013-08-02 22:32:44 UTC (rev 2708) @@ -6,7 +6,8 @@ colorset = c(1:12), legend.loc = NULL, which.plot = c("none", "1L", "2L", "3L", "4L", "5L", "6L", "7L"), max.show = 6, plot.single = FALSE, asset.name, - which.plot.single = c("none", "1L", "2L", "3L", "4L", "5L", "6L", "7L", "8L", "9L", "10L", "11L", "12L", "13L")) + which.plot.single = c("none", "1L", "2L", "3L", "4L", "5L", "6L", "7L", "8L", "9L", "10L", "11L", "12L", "13L"), + VaR.method = "historical") } \arguments{ \item{x}{fit object created by fitTimeSeriesFactorModel.} @@ -43,6 +44,12 @@ OLS residuals 12= CUSUM plot of recursive estimates relative to full sample estimates 13= rolling estimates over 24 month window} + + \item{VaR.method}{character, method for computing VaR. + Valid choices are one of + "modified","gaussian","historical", "kernel". computation + is done with the \code{VaR} in the PerformanceAnalytics + package. Default is "historical".} } \description{ Generic function of plot method for From noreply at r-forge.r-project.org Sat Aug 3 00:47:15 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 3 Aug 2013 00:47:15 +0200 (CEST) Subject: [Returnanalytics-commits] r2709 - in pkg/FactorAnalytics: R man Message-ID: <20130802224715.4B346184CDB@r-forge.r-project.org> Author: chenyian Date: 2013-08-03 00:47:14 +0200 (Sat, 03 Aug 2013) New Revision: 2709 Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R pkg/FactorAnalytics/R/factorModelSdDecomposition.R pkg/FactorAnalytics/R/factorModelVaRDecomposition.R pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd pkg/FactorAnalytics/man/factorModelSdDecomposition.Rd pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd Log: debug .Rd file Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-02 22:32:44 UTC (rev 2708) +++ pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-02 22:47:14 UTC (rev 2709) @@ -23,21 +23,19 @@ #' @param VaR.method character, method for computing VaR. Valid choices are #' one of "modified","gaussian","historical", "kernel".
computation is done with the \code{VaR} #' in the PerformanceAnalytics package. -#' package. +#' +#' #' @return A list with the following components: -#' @returnItem VaR Scalar, nonparametric VaR value for fund reported as a -#' positive number. -#' @returnItem n.exceed Scalar, number of observations beyond VaR. -#' @returnItem idx.exceed \code{n.exceed x 1} vector giving index values of -#' exceedences. -#' @returnItem ES scalar, nonparametric ES value for fund reported as a -#' positive number. -#' @returnItem mcES \code{(K+1) x 1} vector of factor marginal contributions to -#' ES. -#' @returnItem cES \code{(K+1) x 1} vector of factor component contributions to -#' ES. -#' @returnItem pcES \code{(K+1) x 1} vector of factor percent contributions to -#' ES. +#' \itemize{ +#' \item{VaR} {Scalar, nonparametric VaR value for fund reported as a +#' positive number.} +#' \item{n.exceed} Scalar, number of observations beyond VaR. +#' \item{idx.exceed} \code{n.exceed x 1} vector giving index values of exceedences. +#' \item{ES scalar} nonparametric ES value for fund reported as a positive number. +#' \item{mcES} \code{(K+1) x 1} vector of factor marginal contributions to ES. +#' \item{cES} \code{(K+1) x 1} vector of factor component contributions to ES. +#' \item{pcES} \code{(K+1) x 1} vector of factor percent contributions to ES. +#' } #' @author Eric Zivot and Yi-An Chen. #' @references 1. Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A #' General Analysis", \emph{The Journal of Risk} 5/2. \cr 2. Yamai and Yoshiba @@ -57,7 +55,8 @@ #' residuals(fit.macro$asset.fit$HAM1)/sqrt(fit.macro$resid.variance[1])) #' colnames(tmpData)[c(1,4)] = c("HAM1", "residual") #' factor.es.decomp.HAM1 = factorModelEsDecomposition(tmpData, fit.macro$beta[1,], -#' fit.macro$resid.variance[1], tail.prob=0.05) +#' fit.macro$resid.variance[1], tail.prob=0.05, +#' VaR.method="historical" ) #' #' # fundamental factor model #' # try to find factor contribution to ES for STI @@ -68,7 +67,8 @@ #' colnames(tmpData)[c(1,length(tmpData[1,]))] = c("STI", "residual") #' factorModelEsDecomposition(tmpData, #' fit.fund$beta["STI",], -#' fit.fund$resid.variance["STI"], tail.prob=0.05,VaR.method = "HS) +#' fit.fund$resid.variance["STI"], tail.prob=0.05, +#' VaR.method = "historical" ) #' #' @export #' Modified: pkg/FactorAnalytics/R/factorModelSdDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelSdDecomposition.R 2013-08-02 22:32:44 UTC (rev 2708) +++ pkg/FactorAnalytics/R/factorModelSdDecomposition.R 2013-08-02 22:47:14 UTC (rev 2709) @@ -8,13 +8,12 @@ #' @param factor.cov k x k factor excess return covariance matrix. #' @param sig2.e scalar, residual variance from factor model. #' @return an S3 object containing -#' @returnItem sd.fm Scalar, std dev based on factor model. -#' @returnItem mcr.fm (K+1) x 1 vector of factor marginal contributions to risk -#' (sd). -#' @returnItem cr.fm (K+1) x 1 vector of factor component contributions to risk -#' (sd). -#' @returnItem pcr.fm (K+1) x 1 vector of factor percent contributions to risk -#' (sd). +#' \itemize{ +#' \item{sd.fm} Scalar, std dev based on factor model. +#' \item{mcr.fm} (K+1) x 1 vector of factor marginal contributions to risk sd. +#' \item{cr.fm} (K+1) x 1 vector of factor component contributions to risk sd. +#' \item{pcr.fm} (K+1) x 1 vector of factor percent contributions to risk sd.
+#' } #' @author Eric Zivot and Yi-An Chen #' @examples #' Modified: pkg/FactorAnalytics/R/factorModelVaRDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelVaRDecomposition.R 2013-08-02 22:32:44 UTC (rev 2708) +++ pkg/FactorAnalytics/R/factorModelVaRDecomposition.R 2013-08-02 22:47:14 UTC (rev 2709) @@ -21,17 +21,16 @@ #' one of "modified","gaussian","historical", "kernel". computation is done with the \code{VaR} #' in the PerformanceAnalytics package. #' @return an S3 object containing -#' @returnItem VaR.fm Scalar, bootstrap VaR value for fund reported as a +#' \itemize{ +#' \item{VaR.fm} Scalar, bootstrap VaR value for fund reported as a #' positive number. -#' @returnItem n.exceed Scalar, number of observations beyond VaR. -#' @returnItem idx.exceed n.exceed x 1 vector giving index values of +#' \item{n.exceed} Scalar, number of observations beyond VaR. +#' \item{idx.exceed} n.exceed x 1 vector giving index values of #' exceedences. -#' @returnItem mVaR.fm (K+1) x 1 vector of factor marginal contributions to -#' VaR. -#' @returnItem cVaR.fm (K+1) x 1 vector of factor component contributions to -#' VaR. -#' @returnItem pcVaR.fm (K+1) x 1 vector of factor percent contributions to -#' VaR. +#' \item{mVaR.fm} (K+1) x 1 vector of factor marginal contributions to VaR. +#' \item{cVaR.fm} (K+1) x 1 vector of factor component contributions to VaR. +#' \item{pcVaR.fm} (K+1) x 1 vector of factor percent contributions to VaR. +#' } #' @author Eric Zivot and Yi-An Chen #' @references 1. Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A #' General Analysis", The Journal of Risk 5/2. 2. Yamai and Yoshiba (2002). @@ -52,7 +51,7 @@ #' colnames(tmpData)[c(1,4)] = c("HAM1", "residual") #' factor.VaR.decomp.HAM1 = factorModelVaRDecomposition(tmpData, fit.macro$beta[1,], #' fit.macro$resid.variance[1], tail.prob=0.05, -#' VaR.method="historical) +#' VaR.method="historical") #' #' @export factorModelVaRDecomposition <- Modified: pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-02 22:32:44 UTC (rev 2708) +++ pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-02 22:47:14 UTC (rev 2709) @@ -26,10 +26,20 @@ Valid choices are one of "modified","gaussian","historical", "kernel". computation is done with the \code{VaR} in the PerformanceAnalytics - package. package.} + package.} } \value{ - A list with the following components: + A list with the following components: \itemize{ + \item{VaR} {Scalar, nonparametric VaR value for fund + reported as a positive number.} \item{n.exceed} Scalar, + number of observations beyond VaR. \item{idx.exceed} + \code{n.exceed x 1} vector giving index values of + exceedences. \item{ES scalar} nonparametric ES value for + fund reported as a positive number. \item{mcES} + \code{(K+1) x 1} vector of factor marginal contributions + to ES. \item{cES} \code{(K+1) x 1} vector of factor + component contributions to ES. \item{pcES} \code{(K+1) x + 1} vector of factor percent contributions to ES. 
} } \description{ Compute the factor model factor expected shortfall (ES) @@ -59,7 +69,8 @@ residuals(fit.macro$asset.fit$HAM1)/sqrt(fit.macro$resid.variance[1])) colnames(tmpData)[c(1,4)] = c("HAM1", "residual") factor.es.decomp.HAM1 = factorModelEsDecomposition(tmpData, fit.macro$beta[1,], - fit.macro$resid.variance[1], tail.prob=0.05) + fit.macro$resid.variance[1], tail.prob=0.05, + VaR.method="historical" ) # fundamental factor model # try to find factor contribution to ES for STI @@ -70,7 +81,8 @@ colnames(tmpData)[c(1,length(tmpData[1,]))] = c("STI", "residual") factorModelEsDecomposition(tmpData, fit.fund$beta["STI",], - fit.fund$resid.variance["STI"], tail.prob=0.05,VaR.method = "HS) + fit.fund$resid.variance["STI"], tail.prob=0.05, + VaR.method = "historical" ) } \author{ Eric Zivot and Yi-An Chen. Modified: pkg/FactorAnalytics/man/factorModelSdDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelSdDecomposition.Rd 2013-08-02 22:32:44 UTC (rev 2708) +++ pkg/FactorAnalytics/man/factorModelSdDecomposition.Rd 2013-08-02 22:47:14 UTC (rev 2709) @@ -15,7 +15,12 @@ model.} } \value{ - an S3 object containing + an S3 object containing \itemize{ \item{sd.fm} Scalar, + std dev based on factor model. \item{mcr.fm} (K+1) x 1 + vector of factor marginal contributions to risk sd. + \item{cr.fm} (K+1) x 1 vector of factor component + contributions to risk sd. \item{pcr.fm} (K+1) x 1 vector + of factor percent contributions to risk sd.
} } \description{ Compute factor model factor VaR decomposition based on @@ -56,7 +64,7 @@ colnames(tmpData)[c(1,4)] = c("HAM1", "residual") factor.VaR.decomp.HAM1 = factorModelVaRDecomposition(tmpData, fit.macro$beta[1,], fit.macro$resid.variance[1], tail.prob=0.05, - VaR.method="historical) + VaR.method="historical") } \author{ Eric Zivot and Yi-An Chen From noreply at r-forge.r-project.org Sat Aug 3 01:09:10 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 3 Aug 2013 01:09:10 +0200 (CEST) Subject: [Returnanalytics-commits] r2710 - in pkg/FactorAnalytics: R data man Message-ID: <20130802230910.9BDAC1851F6@r-forge.r-project.org> Author: chenyian Date: 2013-08-03 01:09:10 +0200 (Sat, 03 Aug 2013) New Revision: 2710 Added: pkg/FactorAnalytics/data/Stock.df.RData pkg/FactorAnalytics/man/Stock.df.Rd Removed: pkg/FactorAnalytics/data/stock.RDATA pkg/FactorAnalytics/man/stock.Rd Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R pkg/FactorAnalytics/data/ pkg/FactorAnalytics/man/ pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd Log: debug Stock.df.RData Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-02 22:47:14 UTC (rev 2709) +++ pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-02 23:09:10 UTC (rev 2710) @@ -60,6 +60,12 @@ #' #' # fundamental factor model #' # try to find factor contribution to ES for STI +#' data(stock) +#' fit.fund <- fitFundamentalFactorModel(exposure.names=c("BOOK2MARKET", "LOG.MARKETCAP") +#' , data=stock,returnsvar = "RETURN",datevar = "DATE", +#' assetvar = "TICKER", +#' wls = TRUE, regression = "classic", +#' covariance = "classic", full.resid.cov = FALSE) #' idx <- fit.fund$data[,fit.fund$assetvar] == "STI" #' asset.ret <- fit.fund$data[idx,fit.fund$returnsvar] #' tmpData = cbind(asset.ret, fit.fund$factors, Property changes on: pkg/FactorAnalytics/data ___________________________________________________________________ Modified: svn:ignore - CRSP.RDATA factorAnalytics-Ex.ps + CRSP.RDATA factorAnalytics-Ex.ps stock.RDATA Added: pkg/FactorAnalytics/data/Stock.df.RData =================================================================== (Binary files differ) Property changes on: pkg/FactorAnalytics/data/Stock.df.RData ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Deleted: pkg/FactorAnalytics/data/stock.RDATA =================================================================== (Binary files differ) Property changes on: pkg/FactorAnalytics/man ___________________________________________________________________ Modified: svn:ignore - CornishFisher.Rd Style.Rd covEWMA.Rd factorModelPerformanceAttribution.Rd impliedFactorReturns.Rd modifiedEsReport.Rd modifiedIncrementalES.Rd modifiedIncrementalVaR.Rd modifiedPortfolioEsDecomposition.Rd modifiedPortfolioVaRDecomposition.Rd modifiedVaRReport.Rd nonparametricEsReport.Rd nonparametricIncrementalES.Rd nonparametricIncrementalVaR.Rd nonparametricPortfolioEsDecomposition.Rd nonparametricPortfolioVaRDecomposition.Rd nonparametricVaRReport.Rd normalEsReport.Rd normalIncrementalES.Rd normalIncrementalVaR.Rd normalPortfolioEsDecomposition.Rd normalPortfolioVaRDecomposition.Rd normalVaRReport.Rd plot.FM.attribution.Rd plot.MacroFactorModel.Rd print.MacroFactorModel.Rd scenarioPredictions.Rd scenarioPredictionsPortfolio.Rd summary.FM.attribution.Rd 
summary.MacroFactorModel.Rd summary.TimeSeriesModel.Rd + CornishFisher.Rd Style.Rd covEWMA.Rd factorModelPerformanceAttribution.Rd impliedFactorReturns.Rd modifiedEsReport.Rd modifiedIncrementalES.Rd modifiedIncrementalVaR.Rd modifiedPortfolioEsDecomposition.Rd modifiedPortfolioVaRDecomposition.Rd modifiedVaRReport.Rd nonparametricEsReport.Rd nonparametricIncrementalES.Rd nonparametricIncrementalVaR.Rd nonparametricPortfolioEsDecomposition.Rd nonparametricPortfolioVaRDecomposition.Rd nonparametricVaRReport.Rd normalEsReport.Rd normalIncrementalES.Rd normalIncrementalVaR.Rd normalPortfolioEsDecomposition.Rd normalPortfolioVaRDecomposition.Rd normalVaRReport.Rd plot.FM.attribution.Rd plot.MacroFactorModel.Rd print.MacroFactorModel.Rd scenarioPredictions.Rd scenarioPredictionsPortfolio.Rd stock.Rd summary.FM.attribution.Rd summary.MacroFactorModel.Rd summary.TimeSeriesModel.Rd Added: pkg/FactorAnalytics/man/Stock.df.Rd =================================================================== --- pkg/FactorAnalytics/man/Stock.df.Rd (rev 0) +++ pkg/FactorAnalytics/man/Stock.df.Rd 2013-08-02 23:09:10 UTC (rev 2710) @@ -0,0 +1,21 @@ +\docType{data} +\name{Stock.df} +\alias{Stock.df} +\alias{stock} +\title{constructed NYSE 447 assets from 1996-01-01 through 2003-12-31.} +\description{ + constructed NYSE 447 assets from 1996-01-01 through + 2003-12-31. +} +\details{ + Continuous data: PRICE, RETURN, TICKER, VOLUME, SHARES.OUT, + MARKET.EQUITY,LTDEBT, NET.SALES, COMMON.EQUITY, + NET.INCOME, STOCKHOLDERS.EQUITY, LOG.MARKETCAP, + LOG.PRICE, BOOK2MARKET Categorical data: GICS, + GICS.INDUSTRY, GICS.SECTOR +} +\references{ + Guy Yullen and Yi-An Chen +} +\keyword{datasets} + Modified: pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-02 22:47:14 UTC (rev 2709) +++ pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-02 23:09:10 UTC (rev 2710) @@ -74,6 +74,12 @@ # fundamental factor model # try to find factor contribution to ES for STI +data(stock) +fit.fund <- fitFundamentalFactorModel(exposure.names=c("BOOK2MARKET", "LOG.MARKETCAP") + , data=stock,returnsvar = "RETURN",datevar = "DATE", + assetvar = "TICKER", + wls = TRUE, regression = "classic", + covariance = "classic", full.resid.cov = FALSE) idx <- fit.fund$data[,fit.fund$assetvar] == "STI" asset.ret <- fit.fund$data[idx,fit.fund$returnsvar] tmpData = cbind(asset.ret, fit.fund$factors, Deleted: pkg/FactorAnalytics/man/stock.Rd =================================================================== --- pkg/FactorAnalytics/man/stock.Rd 2013-08-02 22:47:14 UTC (rev 2709) +++ pkg/FactorAnalytics/man/stock.Rd 2013-08-02 23:09:10 UTC (rev 2710) @@ -1,21 +0,0 @@ -\docType{data} -\name{stock} -\alias{stock} -\title{constructed NYSE 447 assets from 1996-01-01 through 2003-12-31.} -\description{ - constructed NYSE 447 assets from 1996-01-01 through - 2003-12-31. 
-} -\details{ - Continuous data: PRICE, RETURN, VOLUME, SHARES.OUT, - MARKET.EQUITY,LTDEBT, NET.SALES, COMMON.EQUITY, - NET.INCOME, STOCKHOLDERS.EQUITY, LOG.MARKETCAP, - LOG.PRICE, BOOK2MARKET Categorical data: GICS, - GICS.INDUSTRY, GICS.SECTOR Identification data: DATE, - PERMNO, TICKER.x -} -\references{ - Guy Yullen and Yi-An Chen -} -\keyword{datasets} - From noreply at r-forge.r-project.org Sat Aug 3 01:14:30 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 3 Aug 2013 01:14:30 +0200 (CEST) Subject: [Returnanalytics-commits] r2711 - in pkg/FactorAnalytics: R man Message-ID: <20130802231430.61BB9185670@r-forge.r-project.org> Author: chenyian Date: 2013-08-03 01:14:29 +0200 (Sat, 03 Aug 2013) New Revision: 2711 Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R pkg/FactorAnalytics/R/fitFundamentalFactorModel.R pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r pkg/FactorAnalytics/R/print.FundamentalFactorModel.r pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd Log: debug examples for fitFundamentalFactorModel related function Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-02 23:09:10 UTC (rev 2710) +++ pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-02 23:14:29 UTC (rev 2711) @@ -60,7 +60,7 @@ #' #' # fundamental factor model #' # try to find factor contribution to ES for STI -#' data(stock) +#' data(stock.df) #' fit.fund <- fitFundamentalFactorModel(exposure.names=c("BOOK2MARKET", "LOG.MARKETCAP") #' , data=stock,returnsvar = "RETURN",datevar = "DATE", Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-02 23:09:10 UTC (rev 2710) +++ pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-02 23:14:29 UTC (rev 2711) @@ -64,7 +64,7 @@ #' #' \dontrun{ #' # BARRA type factor model -#' data(stock) +#' data(stock.df) #' # there are 447 assets #' exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") #' test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, Modified: pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r 2013-08-02 23:09:10 UTC (rev 2710) +++ pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r 2013-08-02 23:14:29 UTC (rev 2711) @@ -43,11 +43,8 @@ #' #' \dontrun{ #' # BARRA type factor model +#' data(stock.df) #' # there are 447 assets -#' data(stock) -#' # BARRA type factor model -#' data(stock) -#' # there are 447 assets #' exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") #' fit.fund <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, #' datevar = "DATE", returnsvar = "RETURN", Modified: pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r =================================================================== ---
pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r 2013-08-02 23:09:10 UTC (rev 2710) +++ pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r 2013-08-02 23:14:29 UTC (rev 2711) @@ -13,7 +13,23 @@ #' @method predict FundamentalFactorModel #' @export #' @author Yi-An Chen +#' @examples +#' data(stock.df) +#' fit.fund <- fitFundamentalFactorModel(exposure.names=c("BOOK2MARKET", "LOG.MARKETCAP") +#' , data=stock,returnsvar = "RETURN",datevar = "DATE", +#' assetvar = "TICKER", +#' wls = TRUE, regression = "classic", +#' covariance = "classic", full.resid.cov = FALSE) +#' # If nothing is specified, predict() will give fitted values +#' predict(fit.fund) #' +#' # generate random data +#' testdata <- data[,c("DATE","TICKER")] +#' testdata$BOOK2MARKET <- rnorm(n=42465) +#' testdata$LOG.MARKETCAP <- rnorm(n=42465) +#' predict(fit.fund,testdata,new.assetvar="TICKER",new.datevar="DATE") +#' +#' predict.FundamentalFactorModel <- function(object,newdata,new.assetvar,new.datevar){ # if there is no newdata provided Modified: pkg/FactorAnalytics/R/print.FundamentalFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/print.FundamentalFactorModel.r 2013-08-02 23:09:10 UTC (rev 2710) +++ pkg/FactorAnalytics/R/print.FundamentalFactorModel.r 2013-08-02 23:14:29 UTC (rev 2711) @@ -9,7 +9,7 @@ #' @author Yi-An Chen. #' @examples #' -#' data(stock) +#' data(stock.df) #' # there are 447 assets #' exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") #' test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, Modified: pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r 2013-08-02 23:09:10 UTC (rev 2710) +++ pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r 2013-08-02 23:14:29 UTC (rev 2711) @@ -9,7 +9,7 @@ #' @author Yi-An Chen.
#' @examples #' -#' data(stock) +#' data(stock.df) #' # there are 447 assets #' exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") #' test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, Modified: pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-02 23:09:10 UTC (rev 2710) +++ pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-02 23:14:29 UTC (rev 2711) @@ -74,7 +74,7 @@ # fundamental factor model # try to find factor contribution to ES for STI -data(stock) +data(stock.df) fit.fund <- fitFundamentalFactorModel(exposure.names=c("BOOK2MARKET", "LOG.MARKETCAP") , data=stock,returnsvar = "RETURN",datevar = "DATE", assetvar = "TICKER", Modified: pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd 2013-08-02 23:09:10 UTC (rev 2710) +++ pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd 2013-08-02 23:14:29 UTC (rev 2711) @@ -89,7 +89,7 @@ \examples{ \dontrun{ # BARRA type factor model -data(stock) +data(stock.df) # there are 447 assets exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, Modified: pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd 2013-08-02 23:09:10 UTC (rev 2710) +++ pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd 2013-08-02 23:14:29 UTC (rev 2711) @@ -55,11 +55,8 @@ \examples{ \dontrun{ # BARRA type factor model +data(stock.df) # there are 447 assets -data(stock) -# BARRA type factor model -data(stock) -# there are 447 assets exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") fit.fund <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, datevar = "DATE", returnsvar = "RETURN", Modified: pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd 2013-08-02 23:09:10 UTC (rev 2710) +++ pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd 2013-08-02 23:14:29 UTC (rev 2711) @@ -27,6 +27,22 @@ asset variable and exact exposures names that are used in fit object by \code{fitFundamentalFactorModel} } +\examples{ +data(stock.df) +fit.fund <- fitFundamentalFactorModel(exposure.names=c("BOOK2MARKET", "LOG.MARKETCAP") + , data=stock,returnsvar = "RETURN",datevar = "DATE", + assetvar = "TICKER", + wls = TRUE, regression = "classic", + covariance = "classic", full.resid.cov = FALSE) +# If nothing is specified, predict() will give fitted values +predict(fit.fund) + +# generate random data +testdata <- data[,c("DATE","TICKER")] +testdata$BOOK2MARKET <- rnorm(n=42465) +testdata$LOG.MARKETCAP <- rnorm(n=42465) +predict(fit.fund,testdata,new.assetvar="TICKER",new.datevar="DATE") +} \author{ Yi-An Chen } Modified: pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd 2013-08-02 23:09:10 UTC (rev 2710) +++ pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd 2013-08-02 23:14:29 UTC (rev 2711) @@ -19,7 +19,7 @@ fitFundamentalFactorModel.
} \examples{ -data(stock) +data(stock.df) # there are 447 assets exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, Modified: pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd 2013-08-02 23:09:10 UTC (rev 2710) +++ pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd 2013-08-02 23:14:29 UTC (rev 2711) @@ -19,7 +19,7 @@ fitFundamentalFactorModel. } \examples{ -data(stock) +data(stock.df) # there are 447 assets exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, From noreply at r-forge.r-project.org Sat Aug 3 11:01:22 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 3 Aug 2013 11:01:22 +0200 (CEST) Subject: [Returnanalytics-commits] r2712 - in pkg/PerformanceAnalytics/sandbox/pulkit: week3_4/vignette week6 Message-ID: <20130803090122.ED645185905@r-forge.r-project.org> Author: pulkit Date: 2013-08-03 11:01:22 +0200 (Sat, 03 Aug 2013) New Revision: 2712 Added: pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/vignette/TriplePenance.pdf Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R Log: changes in CDaR single path Added: pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/vignette/TriplePenance.pdf =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/vignette/TriplePenance.pdf (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/vignette/TriplePenance.pdf 2013-08-03 09:01:22 UTC (rev 2712) (Binary files differ)
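The VaR, ES, and sd decomposition functions revised in r2706-r2709 above all rest on the same Euler-theorem identity: writing the factor model as R(t) = beta.star'F.star(t), with beta.star = (beta, sig.e)' and F.star(t) = (F(t)', z(t))', a positively homogeneous risk measure equals the sum of the beta.star-weighted marginal contributions. The following minimal base-R sketch of the sd case makes that identity concrete; the numbers and variable names are illustrative only and are not taken from FactorAnalytics:

## Euler decomposition of factor-model sd (illustrative sketch, not package code)
beta.vec   <- c(0.9, 0.4)                        # k x 1 factor betas
factor.cov <- matrix(c(0.04, 0.01,
                       0.01, 0.09), 2, 2)        # k x k factor covariance
sig2.e     <- 0.02                               # residual variance
beta.star  <- c(beta.vec, sqrt(sig2.e))          # augmented betas (beta, sig.e)'
cov.star   <- diag(3)                            # residual z(t) has unit variance
cov.star[1:2, 1:2] <- factor.cov                 # covariance of F.star = (F', z)'
sd.fm  <- sqrt(as.numeric(t(beta.star) %*% cov.star %*% beta.star))
mcr.fm <- as.numeric(cov.star %*% beta.star) / sd.fm   # marginal contributions
cr.fm  <- beta.star * mcr.fm                           # component contributions
all.equal(sum(cr.fm), sd.fm)                           # Euler's theorem: TRUE

The percent contributions the package reports (pcr.fm and its VaR/ES analogues) would then simply be the component contributions divided by the total, cr.fm/sd.fm here.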
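The kernel-window estimator rewritten in r2706 (factorModelVaRDecomposition) admits a similarly compact illustration. The sketch below simulates data and uses quantile() where the package delegates to VaR() from PerformanceAnalytics, so it only approximates the "historical" branch; all names are illustrative. Marginal VaR contributions are estimated as minus the mean factor return over observations whose fund return falls within the bandwidth epi of the VaR, then rescaled so the weighted pieces sum back to the portfolio VaR:

## Window ("triangular kernel") marginal VaR contributions (illustrative sketch)
set.seed(1)
n  <- 5000
F1 <- rnorm(n, sd = 0.04); F2 <- rnorm(n, sd = 0.06); z <- rnorm(n)
beta.star <- c(0.8, 0.5, 0.02)                         # (beta1, beta2, sig.e)
R  <- as.numeric(cbind(F1, F2, z) %*% beta.star)       # fund returns
tail.prob <- 0.05
epi    <- 2.575 * sd(R) * n^(-1/5)                     # Silverman-style bandwidth
VaR.fm <- as.numeric(quantile(R, probs = tail.prob))   # empirical ("historical") VaR
idx    <- which(R <= VaR.fm + epi & R >= VaR.fm - epi) # window around the VaR
mVaR   <- -colMeans(cbind(F1, F2, z)[idx, ])           # marginal contributions
cf     <- as.numeric(-VaR.fm / sum(mVaR * beta.star))  # correction factor
cVaR   <- cf * beta.star * mVaR                        # component contributions
all.equal(sum(cVaR), -VaR.fm)                          # components add up: TRUE
## the ES analogue averages over the whole tail instead of the VaR window
idx.exceed <- which(R <= VaR.fm)
mES <- -colMeans(cbind(F1, F2, z)[idx.exceed, ])       # marginal ES contributions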
?o?%?-??q???X998???e??l^????SzY??M)co1?c??x?@(????2`\??/??%5??9?v6{??? ??9????????%??????'?]???+?M?v??????????^?t?&??C?v?????????/?A?B.???????K V????????l??/y??A???????????????a????/? ?W??/??????N?????e?????"??-???_/???/|????z1w??g?\???W??]}????O2?2C]???? +?? n??? uc?????Mad?Z????a"%1V~ ? ??H????)?p)?D??u?\?????z??`?1????0A??? Q?G?B??%??????`? ??@???"????{??+?^?W?c$tn[}??O??t???v?a@? m?i?,?3+92??;????4N??3?B<3???G?/?5???Y??2-.?.bb}"r?K??I:/??d?y???X???B r??F?v,?#O5e??p????0??I?T)??dh????????fVNv-??#?I??z??#?gT??h?X????WXjO????K1??????y????>A?)???g???.?g?n/?;?Q?+???>??r???Z??D?&`?????9?(??"~8?????y??%-?8?J?z???B?'?s?J??Y???????8???yG?m???R?dL???B?=???}$5?fn????? IS??2e?@Hp +e1???KO?fyQt?F.4V7Y2??o^)i??=)_:q???-\???8?ZZ??3?+?nb^W???y??|? ?p????!U?h0????L%p?3}#?:]?Q????*w??9E??????&?CM?V?????? +?T???????Yv??&???3bg?&?N?%??j???h1???s??k?G{>??;??_?m ?U??@??D\?Jbf???sZ??f???|??(??fP?X??B??&?0$~?PBO??/??'???>?"/?????wa???? +f?N??C????:????L?s?t??#q?l??;?~C????1?\???????S?D? *??s??h +????O???8#?OP??A?wD?z????]ks0???O??u?I?m??*?60?a(?R???pd^???cJ?.?O?e????:1??%H?\ +????b?Y?A?$????l??O#l?-????N???GM????i???[n??Op`?V????u?? + +????L6?nH??????>&+Jk at 1?Da?>??*?dqF????k???X???1'??B??{M~?NW???Eb???????f?@t?9?E??$?????M&???d?U????)?X^X???y?i?UXr\MTl????QI???f?\aF?????j???y?4????AvqI????SfC;?1 +2?{>???l?qc???U???? +?O{V|n^Xf?"??>?#Xj0R????"??$??Z(?3_?sR?,J??????????g??)??/?iRW?b\? ??n3m???~-??;,&???YAX?j J????eW?T|??1?(???z?bf???M]?<1?*-(Z?ra?Tl?Tf???*Y?@??P???3t????Z6??-?k5????W'Y?1?v??T?v?0????_n???7y??)9?%?h) !????A??FL?????????.= +? ??V?i??By?????? fa??e??O@??1Fg~u?]j?oac?? +??b?jq??!?1?DzV8?>?q3???Q???r>Z????iK9??]?????b?????Y????W??eo:??U??*??R??I[??????P???V;/?*t??+T?~N???a?u?J-YE???!?gY??? #9??+~^?z?1^?9?? +h??5&?]?=?O?P??p?J>?>yVn??&??~?f??>k=$Kk?U?V??y??i?5TQ?????l?????`?B??????[???*??????\d???A???f?L? ?@&n???'????? OCV?*Kk?+?!?.???gH?????U'???e???u?.Rq??!m??+B????p??A??@D??BkB + ????!?? +???0???+??? +??7?_ ???9@??^?X? ?Es?IMB????!G#?m?j???5?V?SX???F*S/Xt9N???$??X?Ne??????? %Y?%SYw?1??@?Gy?dA??N ?-???:?7N??;?o>j??g ?y?H?&? ??#?P??Ka?in{?O???ej?;?k??????4WP??Q?8?B?k??t??6???E? +?l??M"??M +y at jm? ??H?w????5??N?A?c?M?Hr?a4+@??-?ce??;????-Y \????Tx??mw????'????b?\??N?E??????????yX???1?b??FoY??|4i?????!?? +6.?'n?????=`d??:??q??kN???gZ?#???KS5?Eh???? ??Tm*???c??0?szk?D???07?????d?Z?????$0x??? +-?"??????LI???f????F?????q?r?"??M???3?5k3??Y???`????#??f??3Yn?K?????(?\e?????Y?6?V>fA]?E?~{ +n? 5?8??G???L???\???D?P???s???e?JK????EC}??w;??t???uw???P??R?P??????????n?(????G +m?*!???V????e?????q???AwV??*; ?2v?G????????c?Q?X>7$??0KjX+??=?????t7???? ??????3cd??3 +??z??_q??h ~jF=?e}????m`Q??????FbF?? +?rm?"???3?? ?sOPhs?v????z??vC?!???l??~??ir?Z8????????5?41??b?kRC????@]???W??]???W? ???}?J8????????h ?SOG +a?h?N??? +[??s??(??????J^??0SV4?QB?*? +J+d???????a????0NiM?N(??I???q ?"*??2?? +n????vo?S!!t0???????X??LU?jJh?r"l??"r.[???????????'8?|:?W?x????P?!/n?[?0]y??o???#{?q?S?}??????M??Y?R!!?? ??y]?d???????????m????n??i?????Q?3m????? DR,?`??he?J?l@??;fB?????{_?3??q??S? ????j"????W??v?J%3?G>qzv "?#?O??5i@????z?-?R??+?c??*[0????}?>J]?? ??????/?tsOQw???v?g??Un?w????+y??????V??a?|?l3/? ???V2{??3??u'?C??.>??Y+4vw?$?0Z???????!???2g2-?.Z|??`.?Pu +?*?A????]hI???a??o???//r?3?L???x?~#????K?k?????N? *.=?h??Y????x??"?%?3m?????????? ?Z??????h3?b?LY?)? ??BW?H?????XV?mqz9;s???c;i?????'??{??&??t??????????D?j?H?K???1???thP?mI??:EJ??ct?Q??q?o0??D??ya?EX?REz??~???F%?*???|???l?D[??j4?/nA?c??nmFw???x??&?S???V??????a?n??L?(z?f???D?On?q?@2??????q\=?eL???L??????.N??^???z .?#"R? 
[remainder of binary PDF diff omitted]

Author: shubhanm
Date: 2013-08-03 12:10:40 +0200 (Sat, 03 Aug 2013)
New Revision: 2713

Added:
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/glmi.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/nlsi.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Literature/Thumbs.db
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/lmi.R
Log:
Week 6-7 : Regression method support for HC and HAC (heteroskedasticity- and autocorrelation-consistent) covariance estimation for lm(), glm(), and nls().
Status & pending work: still some errors for nls, and a vignette to be added. Also a comparative test against the equivalent MATLAB function.

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/glmi.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/glmi.R	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/glmi.R	2013-08-03 10:10:40 UTC (rev 2713)
@@ -0,0 +1,92 @@
+glmi <- function (formula, family = gaussian, data, vcov = NULL, weights, subset,
+                  na.action, start = NULL, etastart, mustart, offset, control = list(...),
+                  model = TRUE, method = "glm.fit", x = FALSE, y = TRUE, contrasts = NULL,
+                  ...)
+{
+  call <- match.call()
+  if (is.character(family))
+    family <- get(family, mode = "function", envir = parent.frame())
+  if (is.function(family))
+    family <- family()
+  if (is.null(family$family)) {
+    print(family)
+    stop("'family' not recognized")
+  }
+  if (missing(data))
+    data <- environment(formula)
+  mf <- match.call(expand.dots = FALSE)
+  m <- match(c("formula", "data", "subset", "weights", "na.action",
+               "etastart", "mustart", "offset"), names(mf), 0L)
+  mf <- mf[c(1L, m)]
+  mf$drop.unused.levels <- TRUE
+  mf[[1L]] <- as.name("model.frame")
+  mf <- eval(mf, parent.frame())
+  if (identical(method, "model.frame"))
+    return(mf)
+  if (!is.character(method) && !is.function(method))
+    stop("invalid 'method' argument")
+  if (identical(method, "glm.fit"))
+    control <- do.call("glm.control", control)
+  mt <- attr(mf, "terms")
+  Y <- model.response(mf, "any")
+  if (length(dim(Y)) == 1L) {
+    nm <- rownames(Y)
+    dim(Y) <- NULL
+    if (!is.null(nm))
+      names(Y) <- nm
+  }
+  X <- if (!is.empty.model(mt))
+    model.matrix(mt, mf, contrasts)
+  else matrix(, NROW(Y), 0L)
+  weights <- as.vector(model.weights(mf))
+  if (!is.null(weights) && !is.numeric(weights))
+    stop("'weights' must be a numeric vector")
+  if (!is.null(weights) && any(weights < 0))
+    stop("negative weights not allowed")
+  offset <- as.vector(model.offset(mf))
+  if (!is.null(offset)) {
+    if (length(offset) != NROW(Y))
+      stop(gettextf("number of offsets is %d should equal %d (number of observations)",
+                    length(offset), NROW(Y)), domain = NA)
+  }
+  mustart <- model.extract(mf, "mustart")
+  etastart <- model.extract(mf, "etastart")
+  fit <- eval(call(if (is.function(method)) "method" else method,
+                   x = X, y = Y, weights = weights, start = start, etastart = etastart,
+                   mustart = mustart, offset = offset, family = family,
+                   control = control, intercept = attr(mt, "intercept") > 0L))
+  if (length(offset) && attr(mt, "intercept") > 0L) {
+    fit2 <- eval(call(if (is.function(method)) "method" else method,
+                      x = X[, "(Intercept)", drop = FALSE], y = Y, weights = weights,
+                      offset = offset, family = family, control = control,
+                      intercept = TRUE))
+    if (!fit2$converged)
+      warning("fitting to calculate the null deviance did not converge -- increase 'maxit'?")
+    fit$null.deviance <- fit2$deviance
+  }
+  if (model)
+    fit$model <- mf
+  fit$na.action <- attr(mf, "na.action")
+  if (x)
+    fit$x <- X
+  if (!y)
+    fit$y <- NULL
+  fit <- c(fit, list(call = call, formula = formula, terms = mt,
+                     data = data, offset = offset, control = control, method = method,
+                     contrasts = attr(X, "contrasts"), xlevels = .getXlevels(mt, mf)))
+  class(fit) <- c(fit$class, c("glm", "lm"))
+  fit
+  if(is.null(vcov)) {
+    se <- vcov(fit)
+  } else {
+    if (is.function(vcov))
+      se <- vcov(fit)
+    else
+      se <- vcov
+  }
+  fit = list(fit, vHaC = se)
+  fit
+
+}
\ No newline at end of file
Added:
pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/nlsi.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/nlsi.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/nlsi.R 2013-08-03 10:10:40 UTC (rev 2713) @@ -0,0 +1,178 @@ +nlsi <- function (formula, data = parent.frame(),vcov = NULL, start, control = nls.control(), + algorithm = c("default", "plinear", "port"), trace = FALSE, + subset, weights, na.action, model = FALSE, lower = -Inf, + upper = Inf, ...) +{ + formula <- as.formula(formula) + algorithm <- match.arg(algorithm) + if (!is.list(data) && !is.environment(data)) + stop("'data' must be a list or an environment") + mf <- match.call() + varNames <- all.vars(formula) + if (length(formula) == 2L) { + formula[[3L]] <- formula[[2L]] + formula[[2L]] <- 0 + } + form2 <- formula + form2[[2L]] <- 0 + varNamesRHS <- all.vars(form2) + mWeights <- missing(weights) + pnames <- if (missing(start)) { + if (!is.null(attr(data, "parameters"))) { + names(attr(data, "parameters")) + } + else { + cll <- formula[[length(formula)]] + func <- get(as.character(cll[[1L]])) + if (!is.null(pn <- attr(func, "pnames"))) + as.character(as.list(match.call(func, call = cll))[-1L][pn]) + } + } + else names(start) + env <- environment(formula) + if (is.null(env)) + env <- parent.frame() + if (length(pnames)) + varNames <- varNames[is.na(match(varNames, pnames))] + lenVar <- function(var) tryCatch(length(eval(as.name(var), + data, env)), error = function(e) -1) + if (length(varNames)) { + n <- sapply(varNames, lenVar) + if (any(not.there <- n == -1)) { + nnn <- names(n[not.there]) + if (missing(start)) { + if (algorithm == "plinear") + stop("no starting values specified") + warning("No starting values specified for some parameters.\n", + "Initializing ", paste(sQuote(nnn), collapse = ", "), + " to '1.'.\n", "Consider specifying 'start' or using a selfStart model", + domain = NA) + start <- setNames(as.list(rep(1, length(nnn))), + nnn) + varNames <- varNames[i <- is.na(match(varNames, + nnn))] + n <- n[i] + } + else stop(gettextf("parameters without starting value in 'data': %s", + paste(nnn, collapse = ", ")), domain = NA) + } + } + else { + if (length(pnames) && any((np <- sapply(pnames, lenVar)) == + -1)) { + message(sprintf(ngettext(sum(np == -1), "fitting parameter %s without any variables", + "fitting parameters %s without any variables"), + paste(sQuote(pnames[np == -1]), collapse = ", ")), + domain = NA) + n <- integer() + } + else stop("no parameters to fit") + } + respLength <- length(eval(formula[[2L]], data, env)) + if (length(n) > 0L) { + varIndex <- n%%respLength == 0 + if (is.list(data) && diff(range(n[names(n) %in% names(data)])) > + 0) { + mf <- data + if (!missing(subset)) + warning("argument 'subset' will be ignored") + if (!missing(na.action)) + warning("argument 'na.action' will be ignored") + if (missing(start)) + start <- getInitial(formula, mf) + startEnv <- new.env(hash = FALSE, parent = environment(formula)) + for (i in names(start)) assign(i, start[[i]], envir = startEnv) + rhs <- eval(formula[[3L]], data, startEnv) + n <- NROW(rhs) + wts <- if (mWeights) + rep(1, n) + else eval(substitute(weights), data, environment(formula)) + } + else { + mf$formula <- as.formula(paste("~", paste(varNames[varIndex], + collapse = "+")), env = environment(formula)) + mf$start <- mf$control <- mf$algorithm <- mf$trace <- mf$model <- NULL + mf$lower <- mf$upper <- NULL + mf[[1L]] <- as.name("model.frame") 
+ mf <- eval.parent(mf) + n <- nrow(mf) + mf <- as.list(mf) + wts <- if (!mWeights) + model.weights(mf) + else rep(1, n) + } + if (any(wts < 0 | is.na(wts))) + stop("missing or negative weights not allowed") + } + else { + varIndex <- logical() + mf <- list(0) + wts <- numeric() + } + if (missing(start)) + start <- getInitial(formula, mf) + for (var in varNames[!varIndex]) mf[[var]] <- eval(as.name(var), + data, env) + varNamesRHS <- varNamesRHS[varNamesRHS %in% varNames[varIndex]] + m <- switch(algorithm, plinear = nlsModel.plinear(formula, + mf, start, wts), port = nlsModel(formula, mf, start, + wts, upper), nlsModel(formula, mf, start, wts)) + ctrl <- nls.control() + if (!missing(control)) { + control <- as.list(control) + ctrl[names(control)] <- control + } + if (algorithm != "port") { + if (!missing(lower) || !missing(upper)) + warning("upper and lower bounds ignored unless algorithm = \"port\"") + convInfo <- .Call(C_nls_iter, m, ctrl, trace) + nls.out <- list(m = m, convInfo = convInfo, data = substitute(data), + call = match.call()) + } + else { + pfit <- nls_port_fit(m, start, lower, upper, control, + trace, give.v = TRUE) + iv <- pfit[["iv"]] + msg.nls <- port_msg(iv[1L]) + conv <- (iv[1L] %in% 3:6) + if (!conv) { + msg <- paste("Convergence failure:", msg.nls) + if (ctrl$warnOnly) + warning(msg) + else stop(msg) + } + v. <- port_get_named_v(pfit[["v"]]) + cInfo <- list(isConv = conv, finIter = iv[31L], finTol = v.[["NREDUC"]], + nEval = c(`function` = iv[6L], gradient = iv[30L]), + stopCode = iv[1L], stopMessage = msg.nls) + cl <- match.call() + cl$lower <- lower + cl$upper <- upper + nls.out <- list(m = m, data = substitute(data), call = cl, + convInfo = cInfo, convergence = as.integer(!conv), + message = msg.nls) + } + nls.out$call$algorithm <- algorithm + nls.out$call$control <- ctrl + nls.out$call$trace <- trace + nls.out$na.action <- attr(mf, "na.action") + nls.out$dataClasses <- attr(attr(mf, "terms"), "dataClasses")[varNamesRHS] + if (model) + nls.out$model <- mf + if (!mWeights) + nls.out$weights <- wts + nls.out$control <- control + class(nls.out) <- "nls" + nls.out + if(is.null(vcov)) { + se <- vcov(nls.out) + } else { + if (is.function(vcov)) + se <- vcov(nls.out) + else + se <- vcov + } + nls.out = list(nls.out,vHaC = se) + nls.out + +} \ No newline at end of file Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Literature/Thumbs.db =================================================================== (Binary files differ) Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Literature/Thumbs.db ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/lmi.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/lmi.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/lmi.R 2013-08-03 10:10:40 UTC (rev 2713) @@ -0,0 +1,76 @@ +lmi <- function (formula, data,vcov = NULL, subset, weights, na.action, method = "qr", + model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE, + contrasts = NULL, offset, ...) 
+{
+  ret.x <- x
+  ret.y <- y
+  cl <- match.call()
+  mf <- match.call(expand.dots = FALSE)
+  m <- match(c("formula", "data", "subset", "weights", "na.action",
+               "offset"), names(mf), 0L)
+  mf <- mf[c(1L, m)]
+  mf$drop.unused.levels <- TRUE
+  mf[[1L]] <- as.name("model.frame")
+  mf <- eval(mf, parent.frame())
+  if (method == "model.frame")
+    return(mf)
+  else if (method != "qr")
+    warning(gettextf("method = '%s' is not supported. Using 'qr'",
+                     method), domain = NA)
+  mt <- attr(mf, "terms")
+  y <- model.response(mf, "numeric")
+  w <- as.vector(model.weights(mf))
+  if (!is.null(w) && !is.numeric(w))
+    stop("'weights' must be a numeric vector")
+  offset <- as.vector(model.offset(mf))
+  if (!is.null(offset)) {
+    if (length(offset) != NROW(y))
+      stop(gettextf("number of offsets is %d, should equal %d (number of observations)",
+                    length(offset), NROW(y)), domain = NA)
+  }
+  if (is.empty.model(mt)) {
+    x <- NULL
+    z <- list(coefficients = if (is.matrix(y)) matrix(, 0, 3) else numeric(),
+              residuals = y, fitted.values = 0 * y, weights = w, rank = 0L,
+              df.residual = if (!is.null(w)) sum(w != 0) else if (is.matrix(y)) nrow(y) else length(y))
+    if (!is.null(offset)) {
+      z$fitted.values <- offset
+      z$residuals <- y - offset
+    }
+  }
+  else {
+    x <- model.matrix(mt, mf, contrasts)
+    z <- if (is.null(w))
+      lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...)
+    else lm.wfit(x, y, w, offset = offset, singular.ok = singular.ok, ...)
+  }
+  class(z) <- c(if (is.matrix(y)) "mlm", "lm")
+  z$na.action <- attr(mf, "na.action")
+  z$offset <- offset
+  z$contrasts <- attr(x, "contrasts")
+  z$xlevels <- .getXlevels(mt, mf)
+  z$call <- cl
+  z$terms <- mt
+  if (model)
+    z$model <- mf
+  if (ret.x)
+    z$x <- x
+  if (ret.y)
+    z$y <- y
+  if (!qr)
+    z$qr <- NULL
+  #z
+  if(is.null(vcov)) {
+    se <- vcov(z)
+  } else {
+    if (is.function(vcov))
+      se <- vcov(z)
+    else
+      se <- vcov
+  }
+  z = list(z, vHaC = se)
+  z
+}
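A minimal usage sketch of the new wrappers (illustration only, not part of revision 2713; it assumes the sandwich package supplies the HC/HAC estimators and uses simulated data). As the code above shows, vcov may be NULL for the usual covariance, a function that is applied to the fitted object, or a covariance matrix that is passed through as-is:

    # Hypothetical usage sketch (editor's illustration, not from revision 2713):
    # OLS and Gaussian GLM fits with robust coefficient covariances.
    library(sandwich)

    set.seed(1)
    df <- data.frame(x = rnorm(100))
    df$y <- 1 + 2 * df$x + rnorm(100)

    ols  <- lmi(y ~ x, data = df, vcov = vcovHC)                       # HC covariance
    gfit <- glmi(y ~ x, family = gaussian, data = df, vcov = vcovHAC)  # HAC covariance

    coef(ols[[1]])  # coefficients of the underlying lm fit
    ols$vHaC        # heteroskedasticity-consistent covariance matrix
    gfit$vHaC       # HAC covariance matrix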
From noreply at r-forge.r-project.org  Sat Aug  3 15:33:04 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Sat, 3 Aug 2013 15:33:04 +0200 (CEST)
Subject: [Returnanalytics-commits] r2714 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: Week2/Code Week2/Vignette Week3/Code
Message-ID: <20130803133304.314ED1852AE@r-forge.r-project.org>

Author: shubhanm
Date: 2013-08-03 15:33:03 +0200 (Sat, 03 Aug 2013)
New Revision: 2714

Added:
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/na.skip.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn-Graph1.pdf
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn-Graph10.pdf
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn-concordance.tex
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn.Rnw
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn.pdf
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/GLM_Returns.pdf
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/Orignal_Return.pdf
Modified:
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/Return.GLM.R
Log:
Vignette : GLM Return Model

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/Return.GLM.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/Return.GLM.R	2013-08-03 10:10:40 UTC (rev 2713)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/Return.GLM.R	2013-08-03 13:33:03 UTC (rev 2714)
@@ -41,13 +41,15 @@
 # Ra	return vector
 # q	Lag Factors
 # Function:
+  library(tseries)
+
   library(PerformanceAnalytics)
   R = checkData(Ra, method="xts")
   # Get dimensions and labels
   columns.a = ncol(R)
   columnnames.a = colnames(R)
   clean.GLM <- function(column.R, q = 3) {
-    ma.coeff = as.numeric(arma(edhec[,1], 0, q)$theta)
+    ma.coeff = as.numeric((arma(edhec[,1], order = c(0,q)))$coef[1:q])
     column.glm = ma.coeff[q]*lag(column.R, q)
     return(column.glm)

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/na.skip.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/na.skip.R	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/na.skip.R	2013-08-03 13:33:03 UTC (rev 2714)
@@ -0,0 +1,45 @@
+na.skip <- function (x, FUN = NULL, ...) # maybe add a trim capability?
+{ # @author Brian Peterson
+
+  # DESCRIPTION:
+
+  # Time series data often contains NAs, either due to missing days,
+  # noncontiguous series, or merging multiple series.
+  #
+  # Some calculations, such as return calculations, require data that
+  # looks like a vector and needs the output of na.omit.
+  #
+  # It is often convenient to apply these vector-like functions, but
+  # you still need to keep track of the structure of the original data.
+
+  # Inputs
+  # x      the time series to apply FUN to
+  # FUN    function to apply
+  # ...    any additional parameters to FUN
+
+  # Outputs:
+  # An xts time series that has the same index and NAs as the data
+  # passed in, after applying FUN
+
+  nx <- na.omit(x)
+  fx <- FUN(nx, ... = ...)
+  if (is.vector(fx)) {
+    result <- .xts(fx, .index(x), .indexCLASS = indexClass(x))
+  }
+  else {
+    result <- merge(fx, .xts(, .index(x)))
+  }
+  return(result)
+}
+
+###############################################################################
+# R (http://r-project.org/) Econometrics for Performance and Risk Analysis
+#
+# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson
+#
+# This R package is distributed under the terms of the GNU Public License (GPL)
+# for full details see the file COPYING
+#
+# $Id: na.skip.R 1855 2012-01-15 12:57:58Z braverock $
+#
+###############################################################################
\ No newline at end of file

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn-Graph1.pdf
===================================================================
(Binary files differ)

Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn-Graph1.pdf
___________________________________________________________________
Added: svn:mime-type
   + application/octet-stream

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn-Graph10.pdf
===================================================================
(Binary files differ)

Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn-Graph10.pdf
___________________________________________________________________
Added: svn:mime-type
   + application/octet-stream

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn-concordance.tex
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn-concordance.tex	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn-concordance.tex	2013-08-03 13:33:03 UTC (rev 2714)
@@ -0,0 +1,4 @@
+\Sconcordance{concordance:GLMReturn.tex:GLMReturn.Rnw:%
+1 43 1 1 5 1 6 44 1 1 2 1 0 3 1 5 0 1 1 5 0 1 2 6 0 1 1 5 0 1 2 1 0 1 1 %
+1 2 1 0 1 2 1 0 1 2 5 0 1 2 2 1 1 2 1 0 4 1 1 2 1 0 1 2 1 0 1 2 5 0 1 2 %
+1 1}

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn.Rnw
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn.Rnw	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn.Rnw	2013-08-03 13:33:03 UTC (rev 2714)
@@ -0,0 +1,136 @@
+%% no need for \DeclareGraphicsExtensions{.pdf,.eps}
+
+\documentclass[12pt,letterpaper,english]{article}
+\usepackage{times}
+\usepackage[T1]{fontenc}
+\IfFileExists{url.sty}{\usepackage{url}}
+                      {\newcommand{\url}{\texttt}}
+
+\usepackage{babel}
+%\usepackage{noweb}
+\usepackage{Rd}
+
+\usepackage{Sweave}
+\SweaveOpts{engine=R,eps=FALSE}
+%\VignetteIndexEntry{Performance Attribution from Bacon}
+%\VignetteDepends{PerformanceAnalytics}
+%\VignetteKeywords{returns, performance, risk, benchmark, portfolio}
+%\VignettePackage{PerformanceAnalytics}
+
+%\documentclass[a4paper]{article}
+%\usepackage[noae]{Sweave}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage[top=3cm, bottom=3cm, left=2.5cm]{geometry}
+%\usepackage{graphicx}
+%\usepackage{graphicx, verbatim}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage{graphicx}
+
+\title{Getmansky Lo Makarov Return Model}
+\author{R Project for Statistical Computing}
+
+\begin{document}
+\SweaveOpts{concordance=TRUE}
+
+\maketitle
+
+
+\begin{abstract}
+The returns to hedge funds and other alternative investments are often highly serially correlated. In this paper, we explore several sources of such serial correlation and show that the most likely explanation is illiquidity exposure and smoothed returns.
We propose an econometric model of return smoothing and develop estimators for the smoothing profile as well as a smoothing-adjusted Sharpe ratio.
+\end{abstract}
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+@
+
+<<>>=
+source("../Code/Return.GLM.R")
+source("../Code/na.skip.R")
+
+@
+
+\section{Methodology}
+Given a sample of historical returns \((R_1, R_2, \ldots, R_T)\), the method assumes that the fund manager smooths returns in the manner described below.
+
+To quantify the impact of all of these possible sources of serial correlation, denote by \(R_t\) the true economic return of a hedge fund in period t, and let \(R_t\) satisfy the following linear single-factor model:
+
+\begin{equation}
+ R_t = \mu + \beta\delta_t + \xi_t
+\end{equation}
+
+where $\xi_t \sim N(0,1)$ and $\mathrm{Var}[R_t] = \sigma^2$.
+
+True returns represent the flow of information that would determine the equilibrium value of the fund's securities in a frictionless market. However, true economic returns are not observed. Instead, let \(R_t^0\) denote the reported or observed return in period t, and let
+%$Z = \sin(X)$. $\sqrt{X}$.
+
+%$\hat{\mu}$ = $\displaystyle\frac{22}{7}$
+%e^{2 \mu} = 1
+%\begin{equation}
+%\left(\sum_{t=1}^{T} R_t/T\right) = \hat{\mu} \\
+%\end{equation}
+\begin{equation}
+ R_t^0 = \theta_0 R_t + \theta_1 R_{t-1} + \theta_2 R_{t-2} + \cdots + \theta_k R_{t-k}
+\end{equation}
+\begin{equation}
+\theta_j \in [0,1], \qquad j = 0, 1, \ldots, k
+\end{equation}
+
+and
+\begin{equation}
+\theta_0 + \theta_1 + \theta_2 + \cdots + \theta_k = 1,
+\end{equation}
+
+which is a weighted average of the fund's true returns over the most recent k + 1
+periods, including the current period.
+\section{Smoothing Profile Estimates}
+
+Using the methods outlined above, the paper estimates the smoothing model with a maximum likelihood procedure, programmed in MATLAB using the Optimization Toolbox and replicated in Stata using its MA(k) estimation routine. Using the time series analysis and computational finance ("tseries") library, we fit an ARMA model to a univariate time series by conditional least squares; for exact maximum likelihood estimation, arima0 from package stats can be used.
+
+\section{Usage}
+
+In this example we use the edhec database to compute true hedge fund returns.
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+Returns = Return.GLM(edhec[,1])
+skewness(edhec[,1])
+skewness(Returns)
+# Right shift of the returns distribution for a negatively skewed distribution
+kurtosis(edhec[,1])
+kurtosis(Returns)
+# Reduction in "peakedness" around the mean
+layout(rbind(c(1, 2), c(3, 4)))
+ chart.Histogram(Returns, main = "Plain", methods = NULL)
+ chart.Histogram(Returns, main = "Density", breaks = 40,
+ methods = c("add.density", "add.normal"))
+ chart.Histogram(Returns, main = "Skew and Kurt",
+ methods = c("add.centered", "add.rug"))
+chart.Histogram(Returns, main = "Risk Measures",
+ methods = c("add.risk"))
+@
+
+The above figure shows the distribution tending toward a normal IID distribution. For comparative purposes, one can observe the change in the characteristics of the returns relative to the original series.
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+Returns = Return.GLM(edhec[,1])
+layout(rbind(c(1, 2), c(3, 4)))
+ chart.Histogram(edhec[,1], main = "Plain", methods = NULL)
+ chart.Histogram(edhec[,1], main = "Density", breaks = 40,
+ methods = c("add.density", "add.normal"))
+ chart.Histogram(edhec[,1], main = "Skew and Kurt",
+ methods = c("add.centered", "add.rug"))
+chart.Histogram(edhec[,1], main = "Risk Measures",
+ methods = c("add.risk"))
+@
+
+\end{document}
\ No newline at end of file

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn.pdf
===================================================================
(Binary files differ)

Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Vignette/GLMReturn.pdf
___________________________________________________________________
Added: svn:mime-type
   + application/octet-stream

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/GLM_Returns.pdf
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/GLM_Returns.pdf	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/GLM_Returns.pdf	2013-08-03 13:33:03 UTC (rev 2714)
@@ -0,0 +1,154 @@
[154 lines of raw PDF source ("R Graphics Output") omitted]

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/Orignal_Return.pdf
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/Orignal_Return.pdf	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/Orignal_Return.pdf	2013-08-03 13:33:03 UTC (rev 2714)
@@ -0,0 +1,154 @@
[154 lines of raw PDF source ("R Graphics Output") omitted]
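A short illustration of the MA(q) smoothing-profile estimation described in the vignette above (illustration only, not part of revision 2714; it mirrors the tseries::arma fit used by Return.GLM and normalizes the weights as in Getmansky, Lo and Makarov (2004)):

    # Hypothetical sketch (editor's illustration, not from revision 2714):
    # estimate the smoothing weights theta for one return series.
    library(tseries)
    library(PerformanceAnalytics)
    data(edhec)

    q <- 2
    fit <- arma(coredata(edhec[, 1]), order = c(0, q))  # MA(q) by conditional least squares
    theta <- c(1, coef(fit)[1:q])  # theta_0 fixed at 1 before rescaling
    theta <- theta / sum(theta)    # normalize so the smoothing weights sum to one
    round(theta, 3)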
[TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/returnanalytics -r 2714

From noreply at r-forge.r-project.org  Sat Aug  3 16:00:31 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Sat, 3 Aug 2013 16:00:31 +0200 (CEST)
Subject: [Returnanalytics-commits] r2715 - in pkg/PortfolioAnalytics: . R man
Message-ID: <20130803140031.85F27185689@r-forge.r-project.org>

Author: rossbennett34
Date: 2013-08-03 16:00:31 +0200 (Sat, 03 Aug 2013)
New Revision: 2715

Added:
   pkg/PortfolioAnalytics/R/applyFUN.R
   pkg/PortfolioAnalytics/man/applyFUN.Rd
   pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd
Modified:
   pkg/PortfolioAnalytics/DESCRIPTION
   pkg/PortfolioAnalytics/NAMESPACE
   pkg/PortfolioAnalytics/R/charts.ROI.R
Log:
Adding an applyFUN function and modifying chart.Scatter.ROI to use applyFUN

Modified: pkg/PortfolioAnalytics/DESCRIPTION
===================================================================
--- pkg/PortfolioAnalytics/DESCRIPTION	2013-08-03 13:33:03 UTC (rev 2714)
+++ pkg/PortfolioAnalytics/DESCRIPTION	2013-08-03 14:00:31 UTC (rev 2715)
@@ -47,3 +47,4 @@
     'constraint_fn_map.R'
     'optFUN.R'
     'charts.ROI.R'
+    'applyFUN.R'

Modified: pkg/PortfolioAnalytics/NAMESPACE
===================================================================
--- pkg/PortfolioAnalytics/NAMESPACE	2013-08-03 13:33:03 UTC (rev 2714)
+++ pkg/PortfolioAnalytics/NAMESPACE	2013-08-03 14:00:31 UTC (rev 2715)
@@ -2,6 +2,7 @@
 export(add.objective_v1)
 export(add.objective_v2)
 export(add.objective)
+export(applyFUN)
 export(box_constraint)
 export(CCCgarch.MM)
 export(chart.Scatter.DE)
@@ -51,6 +52,7 @@
 export(optimize.portfolio)
 export(plot.optimize.portfolio.DEoptim)
 export(plot.optimize.portfolio.random)
+export(plot.optimize.portfolio.ROI)
 export(plot.optimize.portfolio)
 export(portfolio_risk_objective)
 export(portfolio.spec)

Added: pkg/PortfolioAnalytics/R/applyFUN.R
===================================================================
--- pkg/PortfolioAnalytics/R/applyFUN.R	                        (rev 0)
+++ pkg/PortfolioAnalytics/R/applyFUN.R	2013-08-03 14:00:31 UTC (rev 2715)
@@ -0,0 +1,69 @@
+#' Apply a risk or return function to a set of weights
+#'
+#' This function is used to calculate risk or return metrics given a matrix of
+#' weights and is primarily a convenience function for the chart.Scatter functions
+#'
+#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns
+#' @param weights a matrix of weights generated from random_portfolios or \code{optimize.portfolio}
+#' @param FUN name of the risk or return function to apply, e.g. "mean", "StdDev", "VaR", or "ES"
+#' @param ... any passthrough arguments to FUN
+#' @author Ross Bennett
+#' @export
+applyFUN <- function(R, weights, FUN="mean", ...){
+  nargs <- list(...)
+
+  moments <- function(R){
+    momentargs <- list()
+    momentargs$mu <- matrix(as.vector(apply(R, 2, "mean")), ncol = 1)
+    momentargs$sigma <- cov(R)
+    momentargs$m3 <- PerformanceAnalytics:::M3.MM(R)
+    momentargs$m4 <- PerformanceAnalytics:::M4.MM(R)
+    return(momentargs)
+  }
+
+  nargs <- c(nargs, moments(R))
+  nargs$R <- R
+
+  # match the FUN arg to a risk or return function
+  switch(FUN,
+         mean = {
+           fun = match.fun(mean)
+         },
+         sd =,
+         StdDev = {
+           fun = match.fun(StdDev)
+         },
+         mVaR =,
+         VaR = {
+           fun = match.fun(VaR)
+           if(is.null(nargs$portfolio_method)) nargs$portfolio_method='single'
+           if(is.null(nargs$invert)) nargs$invert = FALSE
+         },
+         es =,
+         mES =,
+         CVaR =,
+         cVaR =,
+         ES = {
+           fun = match.fun(ES)
+           if(is.null(nargs$portfolio_method)) nargs$portfolio_method='single'
+           if(is.null(nargs$invert)) nargs$invert = FALSE
+         },
+{ # see 'S Programming' p. 67 for this matching
+  fun <- try(match.fun(FUN))
+}
+  ) # end switch block
+
+  out <- rep(0, nrow(weights))
+  .formals <- formals(fun)
+  onames <- names(.formals)
+  for(i in 1:nrow(weights)){
+    nargs$weights <- as.numeric(weights[i,])
+    nargs$x <- R %*% as.numeric(weights[i,])
+    dargs <- nargs
+    pm <- pmatch(names(dargs), onames, nomatch = 0L)
+    names(dargs[pm > 0L]) <- onames[pm]
+    .formals[pm] <- dargs[pm > 0L]
+    out[i] <- try(do.call(fun, .formals))
+  }
+  return(out)
+}
\ No newline at end of file
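A stand-alone call of the new helper (illustration only, not part of revision 2715; the weight vectors are made up):

    # Hypothetical sketch (editor's illustration, not from revision 2715):
    # apply return and risk measures to two fixed weight vectors (one per row).
    library(PortfolioAnalytics)
    data(edhec, package = "PerformanceAnalytics")
    R <- edhec[, 1:4]

    weights <- rbind(rep(0.25, 4),
                     c(0.4, 0.3, 0.2, 0.1))

    applyFUN(R = R, weights = weights, FUN = "mean")    # portfolio mean return
    applyFUN(R = R, weights = weights, FUN = "StdDev")  # portfolio standard deviation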
Modified: pkg/PortfolioAnalytics/R/charts.ROI.R
===================================================================
--- pkg/PortfolioAnalytics/R/charts.ROI.R	2013-08-03 13:33:03 UTC (rev 2714)
+++ pkg/PortfolioAnalytics/R/charts.ROI.R	2013-08-03 14:00:31 UTC (rev 2715)
@@ -108,83 +108,12 @@
   # Get the optimal weights from the output of optimize.portfolio
   wts <- ROI$weights
 
-  nargs <- list(...)
-  if(length(nargs)==0) nargs <- NULL
-  if (length('...')==0 | is.null('...')) {
-    # rm('...')
-    nargs <- NULL
-  }
-
-  # Allow the user to pass in a different portfolio object used in set.portfolio.moments
-  if(is.null(portfolio)) portfolio <- ROI$portfolio
-
-  nargs <- set.portfolio.moments(R=R, portfolio=portfolio, momentargs=nargs)
-
-  nargs$R <- R
-  nargs$weights <- wts
-
+  # cbind the optimal weights and random portfolio weights
   rp <- rbind(wts, rp)
 
-  # Match the return.col arg to a function
-  switch(return.col,
-         mean =,
-         median = {
-           returnFUN = match.fun(return.col)
-           nargs$x <- ( R %*% wts ) #do the multivariate mean/median with Kroneker product
-         }
-  )
+  returnpoints <- applyFUN(R=R, weights=rp, FUN=return.col, ...=...)
+  riskpoints <- applyFUN(R=R, weights=rp, FUN=risk.col, ...=...)
 
-  if(is.function(returnFUN)){
-    returnpoints <- rep(0, nrow(rp))
-    .formals <- formals(returnFUN)
-    onames <- names(.formals)
-    for(i in 1:nrow(rp)){
-      nargs$weights <- rp[i,]
-      nargs$x <- R %*% rp[i,]
-      dargs <- nargs
-      pm <- pmatch(names(dargs), onames, nomatch = 0L)
-      names(dargs[pm > 0L]) <- onames[pm]
-      .formals[pm] <- dargs[pm > 0L]
-      returnpoints[i] <- do.call(returnFUN, .formals)
-    }
-  }
-
-  # match the risk.col arg to a function
-  switch(risk.col,
-         sd =,
-         StdDev = {
-           riskFUN = match.fun(StdDev)
-         },
-         mVaR =,
-         VaR = {
-           riskFUN = match.fun(VaR)
-           if(is.null(nargs$portfolio_method)) nargs$portfolio_method='single'
-           if(is.null(nargs$invert)) nargs$invert = FALSE
-         },
-         es =,
-         mES =,
-         CVaR =,
-         cVaR =,
-         ES = {
-           riskFUN = match.fun(ES)
-           if(is.null(nargs$portfolio_method)) nargs$portfolio_method='single'
-           if(is.null(nargs$invert)) nargs$invert = FALSE
-         }
-  )
-
-  if(is.function(riskFUN)){
-    riskpoints <- rep(0, nrow(rp))
-    .formals <- formals(riskFUN)
-    onames <- names(.formals)
-    for(i in 1:nrow(rp)){
-      nargs$weights <- rp[i,]
-      dargs <- nargs
-      pm <- pmatch(names(dargs), onames, nomatch = 0L)
-      names(dargs[pm > 0L]) <- onames[pm]
-      .formals[pm] <- dargs[pm > 0L]
-      riskpoints[i] <- do.call(riskFUN, .formals)
-    }
-  }
 
   plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, main=main)
   points(x=riskpoints[1], y=returnpoints[1], col="blue", pch=16) # optimal
   axis(1, cex.axis = cex.axis, col = element.color)

Added: pkg/PortfolioAnalytics/man/applyFUN.Rd
===================================================================
--- pkg/PortfolioAnalytics/man/applyFUN.Rd	                        (rev 0)
+++ pkg/PortfolioAnalytics/man/applyFUN.Rd	2013-08-03 14:00:31 UTC (rev 2715)
@@ -0,0 +1,25 @@
+\name{applyFUN}
+\alias{applyFUN}
+\title{Apply a risk or return function to a set of weights}
+\usage{
+  applyFUN(R, weights, FUN = "mean", ...)
+}
+\arguments{
+  \item{R}{an xts, vector, matrix, data frame, timeSeries
+  or zoo object of asset returns}
+
+  \item{weights}{a matrix of weights generated from
+  random_portfolios or \code{optimize.portfolio}}
+
+  \item{FUN}{name of the risk or return function to apply,
+  e.g. "mean", "StdDev", "VaR", or "ES"}
+
+  \item{...}{any passthrough arguments to FUN}
+}
+\description{
+  This function is used to calculate risk or return metrics
+  given a matrix of weights and is primarily a
+  convenience function for the chart.Scatter functions
+}
+\author{
+  Ross Bennett
+}

Added: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd
===================================================================
--- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd	                        (rev 0)
+++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd	2013-08-03 14:00:31 UTC (rev 2715)
@@ -0,0 +1,61 @@
+\name{plot.optimize.portfolio.ROI}
+\alias{plot.optimize.portfolio.ROI}
+\title{scatter and weights chart for portfolios}
+\usage{
+  plot.optimize.portfolio.ROI(ROI, R, rp = NULL,
+    portfolio = NULL, risk.col = "StdDev",
+    return.col = "mean", element.color = "darkgray",
+    neighbors = NULL, main = "ROI.Portfolios", ...)
+} +\arguments{ + \item{ROI}{object created by + \code{\link{optimize.portfolio}}} + + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns, used to recalulate the + risk and return metric} + + \item{rp}{set of weights generated by + \code{\link{random_portfolio}}} + + \item{portfolio}{pass in a different portfolio object + used in set.portfolio.moments} + + \item{risk.col}{string matching the objective of a 'risk' + objective, on horizontal axis} + + \item{return.col}{string matching the objective of a + 'return' objective, on vertical axis} + + \item{...}{any other passthru parameters} + + \item{cex.axis}{The magnification to be used for axis + annotation relative to the current setting of \code{cex}} + + \item{element.color}{color for the default plot scatter + points} + + \item{neighbors}{set of 'neighbor' portfolios to + overplot} + + \item{main}{an overall title for the plot: see + \code{\link{title}}} +} +\description{ + The ROI optimizers do not store the portfolio weights + like DEoptim or random portfolios so we will generate + random portfolios for the scatter plot. +} +\details{ + \code{return.col} must be the name of a function used to + compute the return metric on the random portfolio weights + \code{risk.col} must be the name of a function used to + compute the risk metric on the random portfolio weights +} +\author{ + Ross Bennett +} +\seealso{ + \code{\link{optimize.portfolio}} +} + From noreply at r-forge.r-project.org Sun Aug 4 18:00:36 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 4 Aug 2013 18:00:36 +0200 (CEST) Subject: [Returnanalytics-commits] r2716 - pkg/PerformanceAnalytics/sandbox/pulkit/week6 Message-ID: <20130804160036.35668184471@r-forge.r-project.org> Author: pulkit Date: 2013-08-04 18:00:35 +0200 (Sun, 04 Aug 2013) New Revision: 2716 Added: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R Log: Added files for drawdown beta Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R 2013-08-03 14:00:31 UTC (rev 2715) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R 2013-08-04 16:00:35 UTC (rev 2716) @@ -13,12 +13,12 @@ #' #' \deqn{Q = \left\{ \left\{ q_st\right\}_{s,t=1}^{S,T} | \sum_{s = 1}^S \sum_{t = 1}^T{p_s}{q_st} = 1, 0{\leq}q_st{\leq}\frac{1}{(1-\alpha)T}, s = 1....S, t = 1.....T \right\}} #' -#' For \eqn{\alpha = 1} , \eqn{D_\alpha(w)} is defined by (3) with the constraint \eqn{0{\leq}q_st{\leq}\frac{1}{(1-\alpha)T}}, -#' in Q replaced by \eqn{q_st{\geq}0} +#' For \eqn{\alpha = 1} , \eqn{D_\alpha(w)} is defined by (3) with the constraint +#' \eqn{0{\leq}q_st{\leq}\frac{1}{(1-\alpha)T}}, in Q replaced by \eqn{q_st{\geq}0} #' -#' As in the case of a single sample-path, the CDaR definition includes two special cases : (i) for \eqn{\alpha = 1}, -#' \eqn{D_1(w)} is the maximum drawdown, also called drawdown from peak-to-valley, and (ii) for \eqn{\alpha} = 0, \eqn{D_\alpha(w)} -#' is the average drawdown +#' As in the case of a single sample-path, the CDaR definition includes two special cases : +#' (i) for \eqn{\alpha = 1},\eqn{D_1(w)} is the maximum drawdown, also called drawdown from +#' peak-to-valley, and (ii) for \eqn{\alpha} = 0, \eqn{D_\alpha(w)} is the average drawdown #' 
#'@param R an xts, vector, matrix,data frame, timeSeries or zoo object of multiple sample path returns #'@param ps the probability for each sample path @@ -41,26 +41,26 @@ R = na.omit(R) nr = nrow(R) # checking if nr*p is an integer - drawdowns = -as.matrix(Drawdowns(R)) + + multicdar<-function(x){ if((p*nr) %% 1 == 0){ - drawdowns = as.matrix(Drawdowns(R)) + drawdowns = as.matrix(Drawdowns(x)) drawdowns = drawdowns(order(drawdowns),decreasing = TRUE) # average of the drawdowns greater the (1-alpha).100% largest drawdowns - result = (1/((1-p)*nr(R)))*sum(drawdowns[((1-p)*nr):nr]) + result = (1/((1-p)*nr(x)))*sum(drawdowns[((1-p)*nr):nr]) } else{ # if nr*p is not an integer #f.obj = c(rep(0,nr),rep((1/(1-p))*(1/nr),nr),1) + drawdowns = -as.matrix(Drawdowns(x)) - # The objective function is defined - + f.obj = NULL for(i in 1:sample){ for(j in 1:nr){ - f.obj = c(f.obj,ps[i]*drawdowns[i,j]) + f.obj = c(f.obj,ps[i]*drawdowns[j,i]) } } - print(f.obj) - + f.con = NULL # constraint 1: ps.qst = 1 for(i in 1:sample){ for(j in 1:nr){ @@ -68,17 +68,17 @@ } } f.con = matrix(f.con,nrow =1) - f.dir = "=" + f.dir = "==" f.rhs = 1 # constraint 2 : qst >= 0 for(i in 1:sample){ for(j in 1:nr){ r<-rep(0,sample*nr) - r[(i-1)*s+j] = 1 + r[(i-1)*sample+j] = 1 f.con = rbind(f.con,r) } } - f.dir = c(f.dir,">=",sample*nr) + f.dir = c(f.dir,rep(">=",sample*nr)) f.rhs = c(f.rhs,rep(0,sample*nr)) @@ -86,11 +86,11 @@ for(i in 1:sample){ for(j in 1:nr){ r<-rep(0,sample*nr) - r[(i-1)*s+j] = 1 + r[(i-1)*sample+j] = 1 f.con = rbind(f.con,r) } } - f.dir = c(f.dir,"<=",sample*nr) + f.dir = c(f.dir,rep("<=",sample*nr)) f.rhs = c(f.rhs,rep(1/(1-p)*nr,sample*nr)) # constraint 1: @@ -114,20 +114,22 @@ # f.con = rbind(f.con,cbind(diag(nr),matrix(0,nr,nr),1)) # f.dir = c(rep(">=",nr)) # f.rhs = c(f.rhs,rep(0,nr)) - val = lp("max",f.obj,f.con,f.dir,f.rhs) result = val$objval } - - R = checkData(R, method = "matrix") +} + R = checkData(R, method = "matrix") result = matrix(nrow = 1, ncol = ncol(R)) - for (i in 1:ncol(R)) { - result[i] <- CDD(R[, i, drop = FALSE], p = p, - geometric = geometric, invert = invert, ... = ...) 
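
For readers unfamiliar with the f.obj/f.con/f.dir/f.rhs objects assembled
above: this is the calling convention of lpSolve::lp(), which the CDaR scripts
use to solve their linear programs. A minimal, self-contained example of that
interface (illustrative only):

library(lpSolve)
f.obj <- c(2, 3)                        # maximize 2*x1 + 3*x2
f.con <- matrix(c(1, 1,
                  1, 0), nrow = 2, byrow = TRUE)
f.dir <- c("<=", "<=")
f.rhs <- c(4, 3)                        # x1 + x2 <= 4 and x1 <= 3
val <- lp("max", f.obj, f.con, f.dir, f.rhs)
val$objval      # 12
val$solution    # 0 4 (decision variables are non-negative by default)
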
+ for (i in 1:(ncol(R)/sample)) { + ret<-NULL + for(j in 1:sample){ + ret<-cbind(ret,R[,(j-1)*ncol(R)/sample+i]) + } + result[i] <- multicdar(ret) } dim(result) = c(1, NCOL(R)) colnames(result) = colnames(R) rownames(result) = paste("Conditional Drawdown ", p * 100, "%", sep = "") return(result) -} \ No newline at end of file +} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R 2013-08-03 14:00:31 UTC (rev 2715) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R 2013-08-04 16:00:35 UTC (rev 2716) @@ -12,36 +12,36 @@ result = -(1/((1-p)*nr))*sum(drawdowns[((p)*nr):nr]) } else{ - - result = ES(Drawdowns(R),p=p,method="historical") + # CDaR using the CVaR function + # result = ES(Drawdowns(R),p=p,method="historical") # if nr*p is not an integer -# f.obj = c(rep(0,nr),rep((1/(1-p))*(1/nr),nr),1) + f.obj = c(rep(0,nr),rep(((1/(1-p))*(1/nr)),nr),1) # k varies from 1:nr # constraint : zk -uk +y >= 0 -# f.con = cbind(-diag(nr),diag(nr),1) -# f.dir = c(rep(">=",nr)) -# f.rhs = c(rep(0,nr)) + f.con = cbind(-diag(nr),diag(nr),1) + f.dir = c(rep(">=",nr)) + f.rhs = c(rep(0,nr)) # constraint : uk -uk-1 >= -rk -# ut = diag(nr) -# ut[-1,-nr] = ut[-1,-nr] - diag(nr - 1) -# f.con = rbind(f.con,cbind(ut,matrix(0,nr,nr),1)) -# f.dir = c(rep(">=",nr)) -# f.rhs = c(f.rhs,-R) + ut = diag(nr) + ut[-1,-nr] = ut[-1,-nr] - diag(nr - 1) + f.con = rbind(f.con,cbind(ut,matrix(0,nr,nr),0)) + f.dir = c(rep(">=",nr)) + f.rhs = c(f.rhs,-R) # constraint : zk >= 0 -# f.con = rbind(f.con,cbind(matrix(0,nr,nr),diag(nr),1)) -# f.dir = c(rep(">=",nr)) -# f.rhs = c(f.rhs,rep(0,nr)) + f.con = rbind(f.con,cbind(matrix(0,nr,nr),diag(nr),0)) + f.dir = c(rep(">=",nr)) + f.rhs = c(f.rhs,rep(0,nr)) # constraint : uk >= 0 -# f.con = rbind(f.con,cbind(diag(nr),matrix(0,nr,nr),1)) -# f.dir = c(rep(">=",nr)) -# f.rhs = c(f.rhs,rep(0,nr)) + f.con = rbind(f.con,cbind(diag(nr),matrix(0,nr,nr),0)) + f.dir = c(rep(">=",nr)) + f.rhs = c(f.rhs,rep(0,nr)) -# val = lp("min",f.obj,f.con,f.dir,f.rhs) -# result = val$objval + val = lp("min",f.obj,f.con,f.dir,f.rhs) + result = -val$objval } if (invert) result <- -result @@ -58,7 +58,7 @@ } dim(result) = c(1, NCOL(R)) colnames(result) = colnames(R) - rownames(result) = paste("Conditional Drawdown ", p*100, "%", sep = "") + rownames(result) = paste("Conditional Drawdown ", round(p,3)*100, "%", sep = "") } else { portret <- Return.portfolio(R, weights = weights, Added: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R 2013-08-04 16:00:35 UTC (rev 2716) @@ -0,0 +1,83 @@ +#'@title +#'Drawdown Beta for single path +#' +#'@description +#'The drawdown beta is formulated as follows +#' +#'\deqn{\beta_DD = \frac{{\sum_{t=1}^T}{q_t^\asterisk}{(w_{k^{\asterisk}(t)}-w_t)}}{D_{\alpha}(w^M)}} +#' here \eqn{\beta_DD} is the drawdown beta of the instrument. +#'\eqn{k^{\asterisk}(t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_k^M} +#' +#'\eqn{q_t^\asterisk=1/((1-\alpha)T)} if \eqn{d_t^M} is one of the +#'\eqn{(1-\alpha)T} largest drawdowns \eqn{d_1^{M} ,......d_t^M} of the +#'optimal portfolio and \eqn{q_t^\asterisk = 0} otherwise. 
It is assumed +#'that \eqn{D_\alpha(w^M) {\neq} 0} and that \eqn{q_t^\asterisk} and +#'\eqn{k^{\asterisk}(t) are uniquely determined for all \eqn{t = 1....T} +#' +#'The numerator in \eqn{\beta_DD} is the average rate of return of the +#'instrument over time periods corresponding to the \eqn{(1-\alpha)T} largest +#'drawdowns of the optimal portfolio, where \eqn{w_t - w_k^{\asterisk}(t)} +#'is the cumulative rate of return of the instrument from the optimal portfolio#' peak time \eqn{k^\asterisk(t)} to time t. +#' +#'The difference in CDaR and standard betas can be explained by the +#'conceptual difference in beta definitions: the standard beta accounts for +#'the fund returns over the whole return history, including the periods +#'when the market goes up, while CDaR betas focus only on market drawdowns +#'and, thus, are not affected when the market performs well. +#' +#'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#'@param Rm Return series of the optimal portfolio an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#'@param p confidence level for calculation ,default(p=0.95) +#'@param weights portfolio weighting vector, default NULL, see Details +#'#' @param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default TRUE +#' @param invert TRUE/FALSE whether to invert the drawdown measure. see Details. +#'@param \dots any passthru variable. +#' +#'@references +#'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM) +#'with Drawdown Measure.Research Report 2012-9, ISE Dept., University of Florida, +#'September 2012. + +BetaDrawdown<-function(R,Rm,h,p=0.95,weights=NULL,geometric=TRUE,invert=TRUE,...){ + + x = checkData(R) + xm = checkData(Rm) + columnnames = colnames(R) + columns = ncol(R) + drawdowns_m = Drawdowns(Rm) + if(!is.null(weights)){ + x = Returns.portfolio(R,weights) + } + if(geometric){ + cumul_x = cumprod(x+1)-1 + } + else{ + cumul_x = cumsum(x) + } + beta<-function(x){ + q = NULL + q_quantile = quantile(drawdowns_m,1-p) + for(i in 1:nrow(Rm)){ + + if(drawdowns_m[i] Author: pulkit Date: 2013-08-04 23:22:13 +0200 (Sun, 04 Aug 2013) New Revision: 2717 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R Log: multicolumn changes in beta drawdown Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R 2013-08-04 16:00:35 UTC (rev 2716) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R 2013-08-04 21:22:13 UTC (rev 2717) @@ -29,7 +29,7 @@ #'@param Rm Return series of the optimal portfolio an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns #'@param p confidence level for calculation ,default(p=0.95) #'@param weights portfolio weighting vector, default NULL, see Details -#'#' @param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default TRUE +#' @param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default TRUE #' @param invert TRUE/FALSE whether to invert the drawdown measure. see Details. #'@param \dots any passthru variable. #' @@ -38,7 +38,7 @@ #'with Drawdown Measure.Research Report 2012-9, ISE Dept., University of Florida, #'September 2012. 
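
Restating the drawdown beta definition from the roxygen block above in plain
LaTeX, since the non-standard \asterisk markup renders poorly; notation
follows the Zabarankin, Pavlikov and Uryasev (2012) reference quoted in the
docs:

\beta_{DD} \;=\; \frac{\sum_{t=1}^{T} q_t^{*}\,\bigl(w_{k^{*}(t)} - w_t\bigr)}
                      {D_{\alpha}(w^{M})},
\qquad
k^{*}(t) \in \operatorname*{arg\,max}_{t_{\tau} \le k \le t} w_k^{M},

where w_t and w_t^M are the cumulative returns of the instrument and of the
optimal portfolio, q_t^* = 1/((1-\alpha)T) if d_t^M is among the (1-\alpha)T
largest drawdowns of the optimal portfolio and 0 otherwise, and D_\alpha(w^M)
is the optimal portfolio's CDaR appearing in the denominator.
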
-BetaDrawdown<-function(R,Rm,h,p=0.95,weights=NULL,geometric=TRUE,invert=TRUE,...){ +BetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,invert=TRUE,...){ x = checkData(R) xm = checkData(Rm) @@ -50,9 +50,11 @@ } if(geometric){ cumul_x = cumprod(x+1)-1 + cumul_xm = cumprod(xm+1)-1 } else{ cumul_x = cumsum(x) + cumul_xm = cumsum(xm) } beta<-function(x){ q = NULL @@ -60,22 +62,26 @@ for(i in 1:nrow(Rm)){ if(drawdowns_m[i] Author: pulkit Date: 2013-08-05 12:51:30 +0200 (Mon, 05 Aug 2013) New Revision: 2718 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R Log: Changes in Drawdown beta Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R 2013-08-04 21:22:13 UTC (rev 2717) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R 2013-08-05 10:51:30 UTC (rev 2718) @@ -34,12 +34,28 @@ #'@param \dots any passthru variable. #' #'@references -#'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM) -#'with Drawdown Measure.Research Report 2012-9, ISE Dept., University of Florida, -#'September 2012. +#'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model +#'(CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University +#'of Florida,September 2012. +#' +#'@examples +#' +#'BetaDrawdown(edhec[,1],edhec[,2]) #expected value -BetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,invert=TRUE,...){ +BetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,...){ + # DESCRIPTION: + # + # The function is used to find the Drawdown Beta. + # + # INPUT: + # The Return series of the portfolio and the optimal portfolio + # is taken as the input. + # + # OUTPUT: + # The Drawdown beta is given as the output. 
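
The geometric flag threaded through BetaDrawdown above switches between
compounded and simple cumulative returns; concretely (illustrative):

x <- c(0.02, -0.01, 0.03)
cumprod(1 + x) - 1   # geometric chaining:  0.0200 0.0098 0.0401
cumsum(x)            # arithmetic chaining: 0.02   0.01   0.04
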
+ + x = checkData(R) xm = checkData(Rm) columnnames = colnames(R) @@ -56,7 +72,7 @@ cumul_x = cumsum(x) cumul_xm = cumsum(xm) } - beta<-function(x){ + DDbeta<-function(x){ q = NULL q_quantile = quantile(drawdowns_m,1-p) for(i in 1:nrow(Rm)){ @@ -71,7 +87,7 @@ } for(column in 1:columns){ - column.beta = beta(cumul_x[,column]) + column.beta = DDbeta(cumul_x[,column]) if(column == 1){ beta = column.beta } @@ -79,10 +95,13 @@ beta = cbind(beta,column.beta) } } - if(invert){ + + if(columns==1){ + return(beta) } - print(columnnames) + print(beta) colnames(beta) = columnnames + rownames(beta) = paste("Drawdown Beta (p =",p*100,"%)") beta = reclass(beta,R) return(beta) From noreply at r-forge.r-project.org Mon Aug 5 15:56:28 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 5 Aug 2013 15:56:28 +0200 (CEST) Subject: [Returnanalytics-commits] r2719 - pkg/PerformanceAnalytics/sandbox/pulkit/week6 Message-ID: <20130805135628.6464F185736@r-forge.r-project.org> Author: pulkit Date: 2013-08-05 15:56:28 +0200 (Mon, 05 Aug 2013) New Revision: 2719 Added: pkg/PerformanceAnalytics/sandbox/pulkit/week6/Drawdownalpha.R Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R Log: Added files for alpha drawdown Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R 2013-08-05 10:51:30 UTC (rev 2718) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R 2013-08-05 13:56:28 UTC (rev 2719) @@ -40,7 +40,7 @@ #' #'@examples #' -#'BetaDrawdown(edhec[,1],edhec[,2]) #expected value +#'BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431 BetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,...){ @@ -82,7 +82,18 @@ } else q=c(q,0) } - beta_dd = sum((as.numeric(x[which(cumul_xm==max(cumul_xm))])-x)*q)/CDaR(Rm,p=p) + boolean = (cummax(cumul_xm)==cumul_xm) + index = NULL + for(j in 1:nrow(Rm)){ + if(boolean[j] == TRUE){ + index = c(index,j) + b = j + } + else{ + index = c(index,b) + } + } + beta_dd = sum((as.numeric(x[index])-x)*q)/CDaR(Rm,p=p) return(beta_dd) } @@ -99,7 +110,6 @@ if(columns==1){ return(beta) } - print(beta) colnames(beta) = columnnames rownames(beta) = paste("Drawdown Beta (p =",p*100,"%)") beta = reclass(beta,R) Added: pkg/PerformanceAnalytics/sandbox/pulkit/week6/Drawdownalpha.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/Drawdownalpha.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/Drawdownalpha.R 2013-08-05 13:56:28 UTC (rev 2719) @@ -0,0 +1,71 @@ +#' @title +#' Drawdown alpha +#' +#' @description +#' Then the difference between the actual rate of return and the rate of +#' return of the instrument estimated by \eqn{\beta_DD{w_T}} is called CDaR +#' alpha and is given by +#' +#' \deqn{\alpha_DD = w_T - \beta_DD{w_T^M}} +#' +#' here \eqn{\beta_DD} is the beta drawdown. The code for beta drawdown can +#' be found here \code{BetaDrawdown}. 
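
The boolean/index bookkeeping added to BetaDrawdown above locates, for each
date, the most recent running peak of the optimal portfolio's cumulative
return -- the k*(t) of the drawdown beta formula. The same logic as a
self-contained sketch:

cumul_xm <- c(0.01, 0.03, 0.02, 0.05, 0.04)
at_peak  <- cummax(cumul_xm) == cumul_xm  # TRUE wherever a new high is set
index    <- integer(length(cumul_xm))
b <- NA_integer_
for (j in seq_along(cumul_xm)) {
  if (at_peak[j]) b <- j                  # remember the latest peak time
  index[j] <- b                           # carry it forward between peaks
}
index   # 1 2 2 4 4
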
+#' +#' Postive \eqn{\alpha_DD} implies that the instrument did better than it was +#' predicted, and consequently, \eqn{\alpha_DD} can be used as a performance +#' measure to rank instrument and to identify those that outperformed their +#' CAPM predictions +#' +#' +#'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#'@param Rm Return series of the optimal portfolio an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#'@param p confidence level for calculation ,default(p=0.95) +#'@param weights portfolio weighting vector, default NULL, see Details +#' @param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default TRUE +#'@param \dots any passthru variable +#'@references +#'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model +#'(CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University +#'of Florida,September 2012. +#' +#'@examples +#' +#' +AlphaDrawdown<-function(R,Rm,p=0.95,weights = NULL,geometric = TRUE,...){ + # DESCRIPTION: + # This function calculates the drawdown alpha given the return series + # and the optimal return series + # + # INPUTS: + # The return series of the portfolio , the return series of the optimal portfolio + # The confidence level, the weights and the type of cumulative returns + + # OUTPUT: + # The drawdown alpha is given as the output + + + # ERROR HANDLING !! + x = checkData(R) + xm = checkData(Rm) + beta = BetaDrawdown(R,Rm,p = p,weights=weights,geometric=geometric,...) + if(!is.null(weights)){ + x = Returns.portfolio(R,weights) + } + if(geometric){ + cumul_x = cumprod(x+1)-1 + cumul_xm = cumprod(xm+1)-1 + } + else{ + cumul_x = cumsum(x) + cumul_xm = cumsum(xm) + } + x_expected = mean(cumul_x) + xm_expected = mean(cumul_xm) + alpha = x_expected - beta*xm_expected + return(alpha) +} + + + + + From noreply at r-forge.r-project.org Mon Aug 5 19:25:28 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 5 Aug 2013 19:25:28 +0200 (CEST) Subject: [Returnanalytics-commits] r2720 - in pkg/FactorAnalytics: R man Message-ID: <20130805172529.04CD7185751@r-forge.r-project.org> Author: chenyian Date: 2013-08-05 19:25:28 +0200 (Mon, 05 Aug 2013) New Revision: 2720 Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R pkg/FactorAnalytics/R/fitFundamentalFactorModel.R pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r pkg/FactorAnalytics/R/print.FundamentalFactorModel.r pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd Log: debug example of fitFundamentalFactorModel.R Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-05 13:56:28 UTC (rev 2719) +++ pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-05 17:25:28 UTC (rev 2720) @@ -60,7 +60,7 @@ #' #' # fundamental factor model #' # try to find factor contribution to ES for STI -#' data(stock.df) +#' data(Stock.df) #' fit.fund <- fitFundamentalFactorModel(exposure.names=c("BOOK2MARKET", 
"LOG.MARKETCAP") #' , data=stock,returnsvar = "RETURN",datevar = "DATE", #' assetvar = "TICKER", Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-05 13:56:28 UTC (rev 2719) +++ pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-05 17:25:28 UTC (rev 2720) @@ -64,7 +64,7 @@ #' #' \dontrun{ #' # BARRA type factor model -#' data(stock.df) +#' data(Stock.df) #' # there are 447 assets #' exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") #' test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, Modified: pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r 2013-08-05 13:56:28 UTC (rev 2719) +++ pkg/FactorAnalytics/R/plot.FundamentalFactorModel.r 2013-08-05 17:25:28 UTC (rev 2720) @@ -43,7 +43,7 @@ #' #' \dontrun{ #' # BARRA type factor model -#' data(stock.df) +#' data(Stock.df) #' # there are 447 assets #' exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") #' fit.fund <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, Modified: pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r 2013-08-05 13:56:28 UTC (rev 2719) +++ pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r 2013-08-05 17:25:28 UTC (rev 2720) @@ -14,7 +14,7 @@ #' @export #' @author Yi-An Chen #' @examples -#' data(stock.df) +#' data(Stock.df) #' fit.fund <- fitFundamentalFactorModel(exposure.names=c("BOOK2MARKET", "LOG.MARKETCAP") #' , data=stock,returnsvar = "RETURN",datevar = "DATE", #' assetvar = "TICKER", Modified: pkg/FactorAnalytics/R/print.FundamentalFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/print.FundamentalFactorModel.r 2013-08-05 13:56:28 UTC (rev 2719) +++ pkg/FactorAnalytics/R/print.FundamentalFactorModel.r 2013-08-05 17:25:28 UTC (rev 2720) @@ -9,7 +9,7 @@ #' @author Yi-An Chen. #' @examples #' -#' data(stock.df) +#' data(Stock.df) #' # there are 447 assets #' exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") #' test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, Modified: pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r 2013-08-05 13:56:28 UTC (rev 2719) +++ pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r 2013-08-05 17:25:28 UTC (rev 2720) @@ -9,7 +9,7 @@ #' @author Yi-An Chen. 
#' @examples #' -#' data(stock.df) +#' data(Stock.df) #' # there are 447 assets #' exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") #' test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, Modified: pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-05 13:56:28 UTC (rev 2719) +++ pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-05 17:25:28 UTC (rev 2720) @@ -74,7 +74,7 @@ # fundamental factor model # try to find factor contribution to ES for STI -data(stock.df) +data(Stock.df) fit.fund <- fitFundamentalFactorModel(exposure.names=c("BOOK2MARKET", "LOG.MARKETCAP") , data=stock,returnsvar = "RETURN",datevar = "DATE", assetvar = "TICKER", Modified: pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd 2013-08-05 13:56:28 UTC (rev 2719) +++ pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd 2013-08-05 17:25:28 UTC (rev 2720) @@ -89,7 +89,7 @@ \examples{ \dontrun{ # BARRA type factor model -data(stock.df) +data(Stock.df) # there are 447 assets exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, Modified: pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd 2013-08-05 13:56:28 UTC (rev 2719) +++ pkg/FactorAnalytics/man/plot.FundamentalFactorModel.Rd 2013-08-05 17:25:28 UTC (rev 2720) @@ -55,7 +55,7 @@ \examples{ \dontrun{ # BARRA type factor model -data(stock.df) +data(Stock.df) # there are 447 assets exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") fit.fund <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, Modified: pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd 2013-08-05 13:56:28 UTC (rev 2719) +++ pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd 2013-08-05 17:25:28 UTC (rev 2720) @@ -28,7 +28,7 @@ fit object by \code{fitFundamentalFactorModel} } \examples{ -data(stock.df) +data(Stock.df) fit.fund <- fitFundamentalFactorModel(exposure.names=c("BOOK2MARKET", "LOG.MARKETCAP") , data=stock,returnsvar = "RETURN",datevar = "DATE", assetvar = "TICKER", Modified: pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd 2013-08-05 13:56:28 UTC (rev 2719) +++ pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd 2013-08-05 17:25:28 UTC (rev 2720) @@ -19,7 +19,7 @@ fitFundamentalFactorModel. } \examples{ -data(stock.df) +data(Stock.df) # there are 447 assets exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, Modified: pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd 2013-08-05 13:56:28 UTC (rev 2719) +++ pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd 2013-08-05 17:25:28 UTC (rev 2720) @@ -19,7 +19,7 @@ fitFundamentalFactorModel. 
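
The recurring data(stock.df) -> data(Stock.df) fix in this commit hinges on
data() looking up data sets by their case-sensitive data-set name, which need
not match the name of the object that gets loaded; judging from the examples,
the Stock.df data set creates an object called stock, hence the data = stock
arguments. A sketch, assuming the package is installed:

data(Stock.df, package = "FactorAnalytics")  # data-set name, case sensitive
exists("stock")                              # TRUE: the object it loads
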
} \examples{ -data(stock.df) +data(Stock.df) # there are 447 assets exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, From noreply at r-forge.r-project.org Mon Aug 5 20:27:47 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 5 Aug 2013 20:27:47 +0200 (CEST) Subject: [Returnanalytics-commits] r2721 - in pkg/FactorAnalytics: R man Message-ID: <20130805182747.DF436184EA8@r-forge.r-project.org> Author: chenyian Date: 2013-08-05 20:27:47 +0200 (Mon, 05 Aug 2013) New Revision: 2721 Removed: pkg/FactorAnalytics/R/portfolioSdDecomposition.R Modified: pkg/FactorAnalytics/R/ pkg/FactorAnalytics/R/factorModelCovariance.r pkg/FactorAnalytics/R/factorModelMonteCarlo.R pkg/FactorAnalytics/man/factorModelMonteCarlo.Rd Log: debug export tag and returnItem issues. Property changes on: pkg/FactorAnalytics/R ___________________________________________________________________ Modified: svn:ignore - FactorAnalytics-package.R bootstrapFactorESdecomposition.r bootstrapFactorVaRdecomposition.r chart.RollingStyle.R chart.Style.R covEWMA.R dCornishFisher.R factorModelFactorRiskDecomposition.r factorModelGroupRiskDecomposition.r factorModelPerformanceAttribution.r factorModelPortfolioRiskDecomposition.r factorModelRiskAttribution.r factorModelRiskDecomposition.r factorModelSimulation.r impliedFactorReturns.R modifiedEsReport.R modifiedIncrementalES.R modifiedIncrementalVaR.R modifiedPortfolioEsDecomposition.R modifiedPortfolioVaRDecomposition.R modifiedVaRReport.R nonparametricEsReport.R nonparametricIncrementalES.R nonparametricIncrementalVaR.R nonparametricPortfolioEsDecomposition.R nonparametricPortfolioVaRDecomposition.R nonparametricVaRReport.R normalEsReport.R normalIncrementalES.R normalIncrementalVaR.R normalPortfolioEsDecomposition.R normalPortfolioVaRDecomposition.R normalVaRReport.R pCornishFisher.R plot.FM.attribution.r plot.MacroFactorModel.r print.MacroFactorModel.r qCornishFisher.R rCornishFisher.R scenarioPredictions.r scenarioPredictionsPortfolio.r style.QPfit.R style.fit.R summary.FM.attribution.r summary.MacroFactorModel.r table.RollingStyle.R + FactorAnalytics-package.R bootstrapFactorESdecomposition.r bootstrapFactorVaRdecomposition.r chart.RollingStyle.R chart.Style.R covEWMA.R dCornishFisher.R factorModelFactorRiskDecomposition.r factorModelGroupRiskDecomposition.r factorModelPerformanceAttribution.r factorModelPortfolioRiskDecomposition.r factorModelRiskAttribution.r factorModelRiskDecomposition.r factorModelSimulation.r impliedFactorReturns.R modifiedEsReport.R modifiedIncrementalES.R modifiedIncrementalVaR.R modifiedPortfolioEsDecomposition.R modifiedPortfolioVaRDecomposition.R modifiedVaRReport.R nonparametricEsReport.R nonparametricIncrementalES.R nonparametricIncrementalVaR.R nonparametricPortfolioEsDecomposition.R nonparametricPortfolioVaRDecomposition.R nonparametricVaRReport.R normalEsReport.R normalIncrementalES.R normalIncrementalVaR.R normalPortfolioEsDecomposition.R normalPortfolioVaRDecomposition.R normalVaRReport.R pCornishFisher.R plot.FM.attribution.r plot.MacroFactorModel.r portfolioSdDecomposition.R print.MacroFactorModel.r qCornishFisher.R rCornishFisher.R scenarioPredictions.r scenarioPredictionsPortfolio.r style.QPfit.R style.fit.R summary.FM.attribution.r summary.MacroFactorModel.r table.RollingStyle.R Modified: pkg/FactorAnalytics/R/factorModelCovariance.r =================================================================== --- 
pkg/FactorAnalytics/R/factorModelCovariance.r 2013-08-05 17:25:28 UTC (rev 2720) +++ pkg/FactorAnalytics/R/factorModelCovariance.r 2013-08-05 18:27:47 UTC (rev 2721) @@ -60,6 +60,8 @@ #' ret.cov.fundm <- factorModelCovariance(beta.mat1,fit.fund$factor.cov$cov,fit.fund$resid.variance) #' fit.fund$returns.cov$cov == ret.cov.fundm #' } +#' @export +#' factorModelCovariance <- function(beta, factor.cov, resid.variance) { Modified: pkg/FactorAnalytics/R/factorModelMonteCarlo.R =================================================================== --- pkg/FactorAnalytics/R/factorModelMonteCarlo.R 2013-08-05 17:25:28 UTC (rev 2720) +++ pkg/FactorAnalytics/R/factorModelMonteCarlo.R 2013-08-05 18:27:47 UTC (rev 2721) @@ -36,12 +36,14 @@ #' @param return.residuals logical; if \code{TRUE} then return simulated #' residuals in output list object. #' @return A list with the following components: -#' @returnItem returns \code{n.boot x n.funds} matrix of simulated fund +#' \itemize{ +#' \item returns \code{n.boot x n.funds} matrix of simulated fund #' returns. -#' @returnItem factors \code{n.boot x n.factors} matrix of resampled factor +#' \item factors \code{n.boot x n.factors} matrix of resampled factor #' returns. Returned only if \code{return.factors = TRUE}. -#' @returnItem residuals \code{n.boot x n.funds} matrix of simulated fund +#' \item residuals \code{n.boot x n.funds} matrix of simulated fund #' residuals. Returned only if \code{return.residuals = TRUE}. +#' } #' @author Eric Zivot and Yi-An Chen. #' @references Jiang, Y. (2009). UW PhD Thesis. #' @export Deleted: pkg/FactorAnalytics/R/portfolioSdDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/portfolioSdDecomposition.R 2013-08-05 17:25:28 UTC (rev 2720) +++ pkg/FactorAnalytics/R/portfolioSdDecomposition.R 2013-08-05 18:27:47 UTC (rev 2721) @@ -1,79 +0,0 @@ -#' Compute portfolio sd (risk) decomposition by asset. -#' -#' Compute portfolio sd (risk) decomposition by asset. -#' -#' -#' @param w.vec N x 1 vector of portfolio weights -#' @param cov.assets N x N asset covariance matrix -#' @return an S3 list containing -#' @returnItem sd.p Scalar, portfolio standard deviation. -#' @returnItem mcsd.p 1 x N vector, marginal contributions to portfolio -#' standard deviation. -#' @returnItem csd.p 1 x N vector, contributions to portfolio standard -#' deviation. -#' @returnItem pcsd.p 1 x N vector, percent contribution to portfolio standard -#' deviation. -#' @author Eric Zivot and Yi-An Chen -#' @references Qian, Hua and Sorensen (2007) Quantitative Equity Portfolio -#' Management, chapter 3. 
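
The factorModelMonteCarlo change above swaps the legacy @returnItem tags --
the "returnItem issues" named in this commit's log, apparently unsupported by
current roxygen2 -- for plain Rd \itemize markup inside a single @return. The
resulting pattern, schematically (illustrative, with a dummy function):

#' @return A list with components:
#' \itemize{
#'   \item returns    simulated fund returns
#'   \item residuals  simulated residuals, if requested
#' }
myfun <- function() list(returns = NULL, residuals = NULL)
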
-#' @examples -#' -#' # load data from the database -#' data(managers.df) -#' ret.assets = managers.df[,(1:6)] -#' factors = managers.df[,(7:9)] -#' # fit the factor model with OLS -#' fit <-fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", -#' variable.selection="all subsets", factor.set = 3) -#' # factor SD decomposition for HAM1 -#' cov.factors = var(factors) -#' manager.names = colnames(managers.df[,(1:6)]) -#' factor.names = colnames(managers.df[,(7:9)]) -#' # assuming equal weight vector -#' w.vec = rep(1/6,6) -#' # compute with sample covariance matrix (omit NA) -#' cov.sample = ccov(managers.df[,manager.names],na.action=na.omit) -#' port.sd.decomp.sample = portfolioSdDecomposition(w.vec, cov.sample$cov) -#' # show bar chart -#' barplot(port.sd.decomp.sample$pcsd.p, -#' main="Fund Percent Contributions to Portfolio SD", -#' ylab="Percent Contribution", legend.text=FALSE, -#' col="blue") -#' -#' # compute with factor model covariance matrix -#' returnCov.mat<- factorModelCovariance(fit$beta.mat,var(factors),fit$residVars.vec) -#' port.sd.decomp.fm = portfolioSdDecomposition(w.vec, returnCov.mat) -#' -#' -portfolioSdDecomposition <- -function(w.vec, cov.assets) { -## Inputs: -## w.vec n x 1 vector of portfolio weights -## cov.assets n x n asset covariance matrix -## Output: -## A list with the following components: -## sd.p scalar, portfolio sd -## mcsd.p 1 x n vector, marginal contributions to portfolio sd -## csd.p 1 x n vector, contributions to portfolio sd -## pcsd.p 1 x n vector, percent contribution to portfolio sd - - if (any(diag(chol(cov.assets)) == 0)) - warning("Asset covariance matrix is not positive definite") - ## compute portfolio level variance - var.p = as.numeric(t(w.vec) %*% cov.assets %*% w.vec) - sd.p = sqrt(var.p) - ## compute marginal, component and percentage contributions to risk - mcsd.p = (cov.assets %*% w.vec)/sd.p - csd.p = w.vec*mcsd.p - pcsd.p = csd.p/sd.p - colnames(mcsd.p) = "MCSD" - colnames(csd.p) = "CSD" - colnames(pcsd.p) = "PCSD" - ## return results - ans = list(sd.p=sd.p, - mcsd.p=t(mcsd.p), - csd.p=t(csd.p), - pcsd.p=t(pcsd.p)) - return(ans) -} - Modified: pkg/FactorAnalytics/man/factorModelMonteCarlo.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelMonteCarlo.Rd 2013-08-05 17:25:28 UTC (rev 2720) +++ pkg/FactorAnalytics/man/factorModelMonteCarlo.Rd 2013-08-05 18:27:47 UTC (rev 2721) @@ -56,7 +56,14 @@ return simulated residuals in output list object.} } \value{ - A list with the following components: + A list with the following components: \itemize{ \item + returns \code{n.boot x n.funds} matrix of simulated fund + returns. \item factors \code{n.boot x n.factors} matrix + of resampled factor returns. Returned only if + \code{return.factors = TRUE}. \item residuals + \code{n.boot x n.funds} matrix of simulated fund + residuals. Returned only if \code{return.residuals = + TRUE}. } } \description{ Simulate returns using factor model Monte Carlo method. 
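
For reference, the Euler risk decomposition that the removed
portfolioSdDecomposition() implemented, as a self-contained sketch; note that
the component contributions sum exactly to the portfolio sd:

w   <- c(0.5, 0.3, 0.2)                        # portfolio weights
Sig <- matrix(c(0.04, 0.01, 0.00,
                0.01, 0.09, 0.02,
                0.00, 0.02, 0.16), 3, 3)       # asset covariance matrix
sd.p   <- sqrt(as.numeric(t(w) %*% Sig %*% w)) # portfolio sd
mcsd.p <- as.numeric(Sig %*% w) / sd.p         # marginal contributions
csd.p  <- w * mcsd.p                           # component contributions
pcsd.p <- csd.p / sd.p                         # percent contributions
all.equal(sum(csd.p), sd.p)                    # TRUE by Euler's theorem
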
From noreply at r-forge.r-project.org Mon Aug 5 20:49:35 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 5 Aug 2013 20:49:35 +0200 (CEST) Subject: [Returnanalytics-commits] r2722 - in pkg/FactorAnalytics: R man Message-ID: <20130805184935.6036F184EDC@r-forge.r-project.org> Author: chenyian Date: 2013-08-05 20:49:35 +0200 (Mon, 05 Aug 2013) New Revision: 2722 Removed: pkg/FactorAnalytics/man/portfolioSdDecomposition.Rd Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R pkg/FactorAnalytics/man/ pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd Log: debug examples. Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-05 18:27:47 UTC (rev 2721) +++ pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-05 18:49:35 UTC (rev 2722) @@ -66,15 +66,14 @@ #' assetvar = "TICKER", #' wls = TRUE, regression = "classic", #' covariance = "classic", full.resid.cov = FALSE) -#' idx <- fit.fund$data[,fit.fund$assetvar] == "STI" -#' asset.ret <- fit.fund$data[idx,fit.fund$returnsvar] -#' tmpData = cbind(asset.ret, fit.fund$factors, -#' fit.fund$residuals[,"STI"]/sqrt(fit.fund$resid.variance["STI"]) ) -#' colnames(tmpData)[c(1,length(tmpData[1,]))] = c("STI", "residual") -#' factorModelEsDecomposition(tmpData, +#' idx <- fit.fund$data[,fit.fund$assetvar] == "STI" +#' asset.ret <- fit.fund$data[idx,fit.fund$returnsvar] +#' tmpData = cbind(asset.ret, fit.fund$factor.returns, +#' fit.fund$residuals[,"STI"]/sqrt(fit.fund$resid.variance["STI"]) ) +#' colnames(tmpData)[c(1,length(tmpData[1,]))] = c("STI", "residual") +#' factorModelEsDecomposition(tmpData, #' fit.fund$beta["STI",], -#' fit.fund$resid.variance["STI"], tail.prob=0.05, -#' VaR.method = "historical" ) +#' fit.fund$resid.variance["STI"], tail.prob=0.05,VaR.method="historical") #' #' @export #' Modified: pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R 2013-08-05 18:27:47 UTC (rev 2721) +++ pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R 2013-08-05 18:49:35 UTC (rev 2722) @@ -72,7 +72,7 @@ #' \dontrun{ #' # load data from the database #' data(managers.df) -#' fit <- fitTimeseriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), +#' fit <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), #' factors.names=c("EDHEC.LS.EQ","SP500.TR"), #' data=managers.df,fit.method="OLS") #' # summary of HAM1 Property changes on: pkg/FactorAnalytics/man ___________________________________________________________________ Modified: svn:ignore - CornishFisher.Rd Style.Rd covEWMA.Rd factorModelPerformanceAttribution.Rd impliedFactorReturns.Rd modifiedEsReport.Rd modifiedIncrementalES.Rd modifiedIncrementalVaR.Rd modifiedPortfolioEsDecomposition.Rd modifiedPortfolioVaRDecomposition.Rd modifiedVaRReport.Rd nonparametricEsReport.Rd nonparametricIncrementalES.Rd nonparametricIncrementalVaR.Rd nonparametricPortfolioEsDecomposition.Rd nonparametricPortfolioVaRDecomposition.Rd nonparametricVaRReport.Rd normalEsReport.Rd normalIncrementalES.Rd normalIncrementalVaR.Rd normalPortfolioEsDecomposition.Rd normalPortfolioVaRDecomposition.Rd normalVaRReport.Rd plot.FM.attribution.Rd plot.MacroFactorModel.Rd print.MacroFactorModel.Rd scenarioPredictions.Rd scenarioPredictionsPortfolio.Rd stock.Rd 
summary.FM.attribution.Rd summary.MacroFactorModel.Rd summary.TimeSeriesModel.Rd + CornishFisher.Rd Style.Rd covEWMA.Rd factorModelPerformanceAttribution.Rd impliedFactorReturns.Rd modifiedEsReport.Rd modifiedIncrementalES.Rd modifiedIncrementalVaR.Rd modifiedPortfolioEsDecomposition.Rd modifiedPortfolioVaRDecomposition.Rd modifiedVaRReport.Rd nonparametricEsReport.Rd nonparametricIncrementalES.Rd nonparametricIncrementalVaR.Rd nonparametricPortfolioEsDecomposition.Rd nonparametricPortfolioVaRDecomposition.Rd nonparametricVaRReport.Rd normalEsReport.Rd normalIncrementalES.Rd normalIncrementalVaR.Rd normalPortfolioEsDecomposition.Rd normalPortfolioVaRDecomposition.Rd normalVaRReport.Rd plot.FM.attribution.Rd plot.MacroFactorModel.Rd portfolioSdDecomposition.Rd print.MacroFactorModel.Rd scenarioPredictions.Rd scenarioPredictionsPortfolio.Rd stock.Rd summary.FM.attribution.Rd summary.MacroFactorModel.Rd summary.TimeSeriesModel.Rd Modified: pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-05 18:27:47 UTC (rev 2721) +++ pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-05 18:49:35 UTC (rev 2722) @@ -81,14 +81,13 @@ wls = TRUE, regression = "classic", covariance = "classic", full.resid.cov = FALSE) idx <- fit.fund$data[,fit.fund$assetvar] == "STI" - asset.ret <- fit.fund$data[idx,fit.fund$returnsvar] - tmpData = cbind(asset.ret, fit.fund$factors, - fit.fund$residuals[,"STI"]/sqrt(fit.fund$resid.variance["STI"]) ) - colnames(tmpData)[c(1,length(tmpData[1,]))] = c("STI", "residual") - factorModelEsDecomposition(tmpData, +asset.ret <- fit.fund$data[idx,fit.fund$returnsvar] +tmpData = cbind(asset.ret, fit.fund$factor.returns, + fit.fund$residuals[,"STI"]/sqrt(fit.fund$resid.variance["STI"]) ) +colnames(tmpData)[c(1,length(tmpData[1,]))] = c("STI", "residual") +factorModelEsDecomposition(tmpData, fit.fund$beta["STI",], - fit.fund$resid.variance["STI"], tail.prob=0.05, - VaR.method = "historical" ) + fit.fund$resid.variance["STI"], tail.prob=0.05,VaR.method="historical") } \author{ Eric Zviot and Yi-An Chen. Deleted: pkg/FactorAnalytics/man/portfolioSdDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/portfolioSdDecomposition.Rd 2013-08-05 18:27:47 UTC (rev 2721) +++ pkg/FactorAnalytics/man/portfolioSdDecomposition.Rd 2013-08-05 18:49:35 UTC (rev 2722) @@ -1,52 +0,0 @@ -\name{portfolioSdDecomposition} -\alias{portfolioSdDecomposition} -\title{Compute portfolio sd (risk) decomposition by asset.} -\usage{ - portfolioSdDecomposition(w.vec, cov.assets) -} -\arguments{ - \item{w.vec}{N x 1 vector of portfolio weights} - - \item{cov.assets}{N x N asset covariance matrix} -} -\value{ - an S3 list containing -} -\description{ - Compute portfolio sd (risk) decomposition by asset. 
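
One detail of the corrected factorModelEsDecomposition example above worth
spelling out: the residual column is scaled by 1/sqrt(resid.variance) so that
it enters the decomposition as a standardized, unit-variance residual. A
quick illustration:

eps <- rnorm(1000, sd = 0.3)   # raw residuals, variance ~0.09
z   <- eps / sqrt(var(eps))    # standardized residuals
var(z)                         # ~1, the scale the example constructs
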
-} -\examples{ -# load data from the database -data(managers.df) -ret.assets = managers.df[,(1:6)] -factors = managers.df[,(7:9)] -# fit the factor model with OLS -fit <-fitMacroeconomicFactorModel(ret.assets,factors,fit.method="OLS", - variable.selection="all subsets", factor.set = 3) -# factor SD decomposition for HAM1 -cov.factors = var(factors) -manager.names = colnames(managers.df[,(1:6)]) -factor.names = colnames(managers.df[,(7:9)]) -# assuming equal weight vector -w.vec = rep(1/6,6) -# compute with sample covariance matrix (omit NA) -cov.sample = ccov(managers.df[,manager.names],na.action=na.omit) -port.sd.decomp.sample = portfolioSdDecomposition(w.vec, cov.sample$cov) -# show bar chart -barplot(port.sd.decomp.sample$pcsd.p, - main="Fund Percent Contributions to Portfolio SD", - ylab="Percent Contribution", legend.text=FALSE, - col="blue") - -# compute with factor model covariance matrix -returnCov.mat<- factorModelCovariance(fit$beta.mat,var(factors),fit$residVars.vec) -port.sd.decomp.fm = portfolioSdDecomposition(w.vec, returnCov.mat) -} -\author{ - Eric Zivot and Yi-An Chen -} -\references{ - Qian, Hua and Sorensen (2007) Quantitative Equity - Portfolio Management, chapter 3. -} - From noreply at r-forge.r-project.org Mon Aug 5 21:22:22 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 5 Aug 2013 21:22:22 +0200 (CEST) Subject: [Returnanalytics-commits] r2723 - pkg/Meucci/demo Message-ID: <20130805192222.766D5183FA3@r-forge.r-project.org> Author: xavierv Date: 2013-08-05 21:22:22 +0200 (Mon, 05 Aug 2013) New Revision: 2723 Added: pkg/Meucci/demo/S_FactorAnalysisNotOk.R Log: - added FactorAnalysisNotOK demo script from chapter 3 Added: pkg/Meucci/demo/S_FactorAnalysisNotOk.R =================================================================== --- pkg/Meucci/demo/S_FactorAnalysisNotOk.R (rev 0) +++ pkg/Meucci/demo/S_FactorAnalysisNotOk.R 2013-08-05 19:22:22 UTC (rev 2723) @@ -0,0 +1,47 @@ +#'This script illustrates the hidden factor analysis puzzle, as described in A. Meucci, +#'"Risk and Asset Allocation", Springer, 2005, Chapter 3. +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_FactorAnalysisNotOk.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} + + +################################################################################################################## +### Inputs + +N = 5; # market dimension +K = 2; # factors dimension +J = 10000; # numbers of simulations + +################################################################################################################## +### Define true hidden loadings B + +B = matrix( runif( N*K ), N, K ) - 0.5; + +B = B / sqrt( 1.5 * max( max( B %*% t(B) ) ) ); + +# define true hidden idiosyncratic variances +D = array( 1, N) - diag( B %*% t(B) ); + +# define true hidden global covariance +S = B %*% t( B ) + diag( D, length(D) ); + +# generate normal variables with matching moments +X = MvnRnd( matrix( 0, N, 1 ), S, J ); + +# recover loadings FA$loadings, idiosyncratic variances FA$uniquenesness and factors FA$scores by factor analysis +#[FA$loadings, FA$uniquenesness, T_, stats, F_] +FA = factanal(X, K, scores = "Bartlett" ); + +# factor analysis recovers the structure exactly however... 
+S_ = FA$loadings %*% t( FA$loadings ) + diag( FA$uniquenesses, length( FA$uniquenesses) ); +Match = 1 - max( abs( ( S - S_) / S) ); +print(Match); + +# ...the systematic+idiosyncratic decomposition is NOT recovered +U_ = X - FA$scores %*% t(FA$loadings); # compute residuals +S_U = cor( U_ ); # compute correlations +# residuals are not idiosyncratic +print( S_U ); \ No newline at end of file From noreply at r-forge.r-project.org Mon Aug 5 22:34:55 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 5 Aug 2013 22:34:55 +0200 (CEST) Subject: [Returnanalytics-commits] r2724 - in pkg/FactorAnalytics: R man Message-ID: <20130805203455.7B5B4184D31@r-forge.r-project.org> Author: chenyian Date: 2013-08-05 22:34:55 +0200 (Mon, 05 Aug 2013) New Revision: 2724 Modified: pkg/FactorAnalytics/R/factorModelMonteCarlo.R pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r pkg/FactorAnalytics/R/predict.StatFactorModel.r pkg/FactorAnalytics/R/predict.TimeSeriesFactorModel.r pkg/FactorAnalytics/R/print.FundamentalFactorModel.r pkg/FactorAnalytics/R/print.TimeSeriesFactorModel.r pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r pkg/FactorAnalytics/R/summary.StatFactorModel.r pkg/FactorAnalytics/man/Stock.df.Rd pkg/FactorAnalytics/man/factorModelMonteCarlo.Rd pkg/FactorAnalytics/man/fitTimeseriesFactorModel.Rd pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/predict.StatFactorModel.Rd pkg/FactorAnalytics/man/predict.TimeSeriesFactorModel.Rd pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/print.TimeSeriesFactorModel.Rd pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd pkg/FactorAnalytics/man/summary.StatFactorModel.Rd Log: debug examples. Modified: pkg/FactorAnalytics/R/factorModelMonteCarlo.R =================================================================== --- pkg/FactorAnalytics/R/factorModelMonteCarlo.R 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/R/factorModelMonteCarlo.R 2013-08-05 20:34:55 UTC (rev 2724) @@ -51,12 +51,12 @@ #' #' # load data from the database #' data(managers.df) -#' fit <- fitTimeseriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), +#' fit <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), #' factors.names=c("EDHEC.LS.EQ","SP500.TR"), #' data=managers.df,fit.method="OLS") -#' factorData=factors -#' Beta.mat=fit$beta.mat -#' residualData=as.matrix(fit$residVars.vec,1,6) +#' factorData= managers.df[,c("EDHEC.LS.EQ","SP500.TR")] +#' Beta.mat=fit$beta +#' residualData=as.matrix(fit$resid.variance,1,6) #' n.boot=1000 #' # bootstrap returns data from factor model with residuals sample from normal distribution #' bootData <- factorModelMonteCarlo(n.boot, factorData,Beta.mat, residual.dist="normal", Modified: pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/R/predict.FundamentalFactorModel.r 2013-08-05 20:34:55 UTC (rev 2724) @@ -21,13 +21,13 @@ #' wls = TRUE, regression = "classic", #' covariance = "classic", full.resid.cov = FALSE) #' # If not specify anything, predict() will give fitted value -#' predict(fit.fund) +#' pred.fund <- predict(fit.fund) #' #' # generate random data -#' testdata <- data[,c("DATE","TICKER")] +#' testdata <- stock[,c("DATE","TICKER")] #' testdata$BOOK2MARKET <- rnorm(n=42465) #' testdata$LOG.MARKETCAP <- 
rnorm(n=42465) -#' predict(fit.fund,testdata,new.assetvar="TICKER",new.datevar="DATE") +#' pred.fund2 <- predict(fit.fund,testdata,new.assetvar="TICKER",new.datevar="DATE") #' #' predict.FundamentalFactorModel <- function(object,newdata,new.assetvar,new.datevar){ Modified: pkg/FactorAnalytics/R/predict.StatFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/predict.StatFactorModel.r 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/R/predict.StatFactorModel.r 2013-08-05 20:34:55 UTC (rev 2724) @@ -7,16 +7,13 @@ #' @param newdata a vector, matrix, data.frame, xts, timeSeries or zoo object to be coerced. #' @param ... Any other arguments used in \code{predict.lm}. For example like newdata and fit.se. #' @author Yi-An Chen. -#' ' +#' @method predict StatFactorModel +#' @export #' @examples #' data(stat.fm.data) -#'.fit <- fitStatisticalFactorModel(sfm.dat,k=2, -# ckeckData.method="data.frame") +#' fit <- fitStatisticalFactorModel(sfm.dat,k=2) +#' pred.stat <- predict(fit) #' -#' predict(fit) -#' @method predict StatFactorModel -#' @export -#' predict.StatFactorModel <- function(object,newdata = NULL,...){ Modified: pkg/FactorAnalytics/R/predict.TimeSeriesFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/predict.TimeSeriesFactorModel.r 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/R/predict.TimeSeriesFactorModel.r 2013-08-05 20:34:55 UTC (rev 2724) @@ -14,12 +14,14 @@ #' data(managers.df) #' ret.assets = managers.df[,(1:6)] #' # fit the factor model with OLS -#' fit <- fitTimeseriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), +#' fit <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), #' factors.names=c("EDHEC.LS.EQ","SP500.TR"), #' data=managers.df,fit.method="OLS") #' -#' predict(fit) -#' predict(fit,newdata,interval="confidence") +#' pred.fit <- predict(fit) +#' newdata <- data.frame(EDHEC.LS.EQ = rnorm(n=120), SP500.TR = rnorm(n=120) ) +#' rownames(newdata) <- rownames(fit$data) +#' pred.fit2 <- predict(fit,newdata,interval="confidence") #' #' @method predict TimeSeriesFactorModel #' @export Modified: pkg/FactorAnalytics/R/print.FundamentalFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/print.FundamentalFactorModel.r 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/R/print.FundamentalFactorModel.r 2013-08-05 20:34:55 UTC (rev 2724) @@ -7,12 +7,14 @@ #' @param digits integer indicating the number of decimal places. Default is 3. #' @param ... Other arguments for print methods. #' @author Yi-An Chen. +#' @method print FundamentalFactorModel +#' @export #' @examples #' #' data(Stock.df) #' # there are 447 assets #' exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") -#' test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, +#' test.fit <- fitFundamentalFactorModel(data=stock,exposure.names=exposure.names, #' datevar = "DATE", returnsvar = "RETURN", #' assetvar = "TICKER", wls = TRUE, #' regression = "classic", @@ -20,8 +22,7 @@ #' robust.scale = TRUE) #' #' print(test.fit) -#' @method print FundamentalFactorModel -#' @export +#' print.FundamentalFactorModel <- function(x, digits = max(3, .Options$digits - 3), ...) 
{ Modified: pkg/FactorAnalytics/R/print.TimeSeriesFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/print.TimeSeriesFactorModel.r 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/R/print.TimeSeriesFactorModel.r 2013-08-05 20:34:55 UTC (rev 2724) @@ -7,16 +7,17 @@ #' @param digits integer indicating the number of decimal places. Default is 3. #' @param ... arguments to be passed to print method. #' @author Yi-An Chen. +#' @method print TimeSeriesFactorModel +#' @export #' @examples #' #' # load data from the database #' data(managers.df) -#' fit.macro <- fitTimeseriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), +#' fit.macro <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), #' factors.names=c("EDHEC.LS.EQ","SP500.TR"), #' data=managers.df,fit.method="OLS") #' print(fit.macro) -#' @method print TimeSeriesFactorModel -#' @export +#' print.TimeSeriesFactorModel <- function(x,digits=max(3, .Options$digits - 3),...){ if(!is.null(cl <- x$call)) { cat("\nCall:\n") Modified: pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/R/summary.FundamentalFactorModel.r 2013-08-05 20:34:55 UTC (rev 2724) @@ -7,12 +7,14 @@ #' @param digits integer indicating the number of decimal places. Default is 3. #' @param ... Other arguments for print methods. #' @author Yi-An Chen. +#' @method summary FundamentalFactorModel +#' @export #' @examples #' #' data(Stock.df) #' # there are 447 assets #' exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") -#' test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, +#' test.fit <- fitFundamentalFactorModel(data=stock,exposure.names=exposure.names, #' datevar = "DATE", returnsvar = "RETURN", #' assetvar = "TICKER", wls = TRUE, #' regression = "classic", @@ -20,8 +22,7 @@ #' robust.scale = TRUE) #' #' summary(test.fit) -#' @method summary FundamentalFactorModel -#' @export +#' summary.FundamentalFactorModel <- function(object, digits = max(3, .Options$digits - 3), ...) { Modified: pkg/FactorAnalytics/R/summary.StatFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/summary.StatFactorModel.r 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/R/summary.StatFactorModel.r 2013-08-05 20:34:55 UTC (rev 2724) @@ -7,17 +7,17 @@ #' @param digits Integer indicating the number of decimal places. Default is 3. #' @param ... other option used in \code{summary.lm} #' @author Yi-An Chen. 
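
A small idiom worth flagging in the print methods above:
if(!is.null(cl <- x$call)) assigns and tests in a single expression. A
standalone equivalent:

obj <- list(call = quote(lm(y ~ x)))
if (!is.null(cl <- obj$call)) {   # assign cl, then test it, in one step
  cat("Call:\n")
  print(cl)
}
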
+#' @method summary StatFactorModel +#' @export #' @examples #' #' # load data from the database -#' data(managers.df) +#' data(stat.fm.data) #' # fit the factor model with OLS -#' fit <- fitStatisticalFactorModel(fitStatisticalFactorModel(sfm.dat,k=2, -#' ckeckData.method="data.frame")) +#' fit <- fitStatisticalFactorModel(sfm.dat,k=2) #' summary(fit) -#' @method summary StatFactorModel -#' @export #' +#' summary.StatFactorModel <- function(object,digits=3){ if(!is.null(cl <- object$call)) { cat("\nCall:\n") Modified: pkg/FactorAnalytics/man/Stock.df.Rd =================================================================== --- pkg/FactorAnalytics/man/Stock.df.Rd 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/man/Stock.df.Rd 2013-08-05 20:34:55 UTC (rev 2724) @@ -1,7 +1,6 @@ \docType{data} \name{Stock.df} \alias{Stock.df} -\alias{stock} \title{constructed NYSE 447 assets from 1996-01-01 through 2003-12-31.} \description{ constructed NYSE 447 assets from 1996-01-01 through Modified: pkg/FactorAnalytics/man/factorModelMonteCarlo.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelMonteCarlo.Rd 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/man/factorModelMonteCarlo.Rd 2013-08-05 20:34:55 UTC (rev 2724) @@ -79,12 +79,12 @@ \examples{ # load data from the database data(managers.df) -fit <- fitTimeseriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), +fit <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), factors.names=c("EDHEC.LS.EQ","SP500.TR"), data=managers.df,fit.method="OLS") -factorData=factors -Beta.mat=fit$beta.mat -residualData=as.matrix(fit$residVars.vec,1,6) +factorData= managers.df[,c("EDHEC.LS.EQ","SP500.TR")] +Beta.mat=fit$beta +residualData=as.matrix(fit$resid.variance,1,6) n.boot=1000 # bootstrap returns data from factor model with residuals sample from normal distribution bootData <- factorModelMonteCarlo(n.boot, factorData,Beta.mat, residual.dist="normal", Modified: pkg/FactorAnalytics/man/fitTimeseriesFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/fitTimeseriesFactorModel.Rd 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/man/fitTimeseriesFactorModel.Rd 2013-08-05 20:34:55 UTC (rev 2724) @@ -105,7 +105,7 @@ \dontrun{ # load data from the database data(managers.df) -fit <- fitTimeseriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), +fit <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), factors.names=c("EDHEC.LS.EQ","SP500.TR"), data=managers.df,fit.method="OLS") # summary of HAM1 Modified: pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/man/predict.FundamentalFactorModel.Rd 2013-08-05 20:34:55 UTC (rev 2724) @@ -35,13 +35,13 @@ wls = TRUE, regression = "classic", covariance = "classic", full.resid.cov = FALSE) # If not specify anything, predict() will give fitted value -predict(fit.fund) +pred.fund <- predict(fit.fund) # generate random data -testdata <- data[,c("DATE","TICKER")] +testdata <- stock[,c("DATE","TICKER")] testdata$BOOK2MARKET <- rnorm(n=42465) testdata$LOG.MARKETCAP <- rnorm(n=42465) -predict(fit.fund,testdata,new.assetvar="TICKER",new.datevar="DATE") +pred.fund2 <- predict(fit.fund,testdata,new.assetvar="TICKER",new.datevar="DATE") } \author{ Yi-An Chen 
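
The @method/@export reshuffling across these files follows the standard
roxygen2 recipe for registering an S3 method, which emits an S3method()
directive into NAMESPACE. A generic template (illustrative; the class and
slot names are made up):

#' @method predict MyModel
#' @export
predict.MyModel <- function(object, ...) {
  object$fitted   # hypothetical slot standing in for real prediction logic
}
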
Modified: pkg/FactorAnalytics/man/predict.StatFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/predict.StatFactorModel.Rd 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/man/predict.StatFactorModel.Rd 2013-08-05 20:34:55 UTC (rev 2724) @@ -22,11 +22,10 @@ } \examples{ data(stat.fm.data) -.fit <- fitStatisticalFactorModel(sfm.dat,k=2, - -predict(fit) +fit <- fitStatisticalFactorModel(sfm.dat,k=2) +pred.stat <- predict(fit) } \author{ - Yi-An Chen. ' + Yi-An Chen. } Modified: pkg/FactorAnalytics/man/predict.TimeSeriesFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/predict.TimeSeriesFactorModel.Rd 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/man/predict.TimeSeriesFactorModel.Rd 2013-08-05 20:34:55 UTC (rev 2724) @@ -25,12 +25,14 @@ data(managers.df) ret.assets = managers.df[,(1:6)] # fit the factor model with OLS -fit <- fitTimeseriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), +fit <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), factors.names=c("EDHEC.LS.EQ","SP500.TR"), data=managers.df,fit.method="OLS") -predict(fit) -predict(fit,newdata,interval="confidence") +pred.fit <- predict(fit) +newdata <- data.frame(EDHEC.LS.EQ = rnorm(n=120), SP500.TR = rnorm(n=120) ) +rownames(newdata) <- rownames(fit$data) +pred.fit2 <- predict(fit,newdata,interval="confidence") } \author{ Yi-An Chen. Modified: pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/man/print.FundamentalFactorModel.Rd 2013-08-05 20:34:55 UTC (rev 2724) @@ -22,7 +22,7 @@ data(Stock.df) # there are 447 assets exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") -test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, +test.fit <- fitFundamentalFactorModel(data=stock,exposure.names=exposure.names, datevar = "DATE", returnsvar = "RETURN", assetvar = "TICKER", wls = TRUE, regression = "classic", Modified: pkg/FactorAnalytics/man/print.TimeSeriesFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/print.TimeSeriesFactorModel.Rd 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/man/print.TimeSeriesFactorModel.Rd 2013-08-05 20:34:55 UTC (rev 2724) @@ -20,7 +20,7 @@ \examples{ # load data from the database data(managers.df) -fit.macro <- fitTimeseriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), +fit.macro <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), factors.names=c("EDHEC.LS.EQ","SP500.TR"), data=managers.df,fit.method="OLS") print(fit.macro) Modified: pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/man/summary.FundamentalFactorModel.Rd 2013-08-05 20:34:55 UTC (rev 2724) @@ -22,7 +22,7 @@ data(Stock.df) # there are 447 assets exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") -test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, +test.fit <- fitFundamentalFactorModel(data=stock,exposure.names=exposure.names, datevar = "DATE", returnsvar = "RETURN", assetvar = "TICKER", wls = TRUE, regression = "classic", Modified: 
pkg/FactorAnalytics/man/summary.StatFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/summary.StatFactorModel.Rd 2013-08-05 19:22:22 UTC (rev 2723) +++ pkg/FactorAnalytics/man/summary.StatFactorModel.Rd 2013-08-05 20:34:55 UTC (rev 2724) @@ -19,10 +19,9 @@ } \examples{ # load data from the database -data(managers.df) +data(stat.fm.data) # fit the factor model with OLS -fit <- fitStatisticalFactorModel(fitStatisticalFactorModel(sfm.dat,k=2, - ckeckData.method="data.frame")) +fit <- fitStatisticalFactorModel(sfm.dat,k=2) summary(fit) } \author{ From noreply at r-forge.r-project.org Tue Aug 6 05:38:12 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 6 Aug 2013 05:38:12 +0200 (CEST) Subject: [Returnanalytics-commits] r2725 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130806033812.C84C4184F53@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-06 05:38:07 +0200 (Tue, 06 Aug 2013) New Revision: 2725 Added: pkg/PortfolioAnalytics/man/extractStats.optimize.portfolio.pso.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/extractstats.R Log: adding extractStats method for pso optimization method Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-05 20:34:55 UTC (rev 2724) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-06 03:38:07 UTC (rev 2725) @@ -26,6 +26,7 @@ export(extract.efficient.frontier) export(extractStats.optimize.portfolio.DEoptim) export(extractStats.optimize.portfolio.parallel) +export(extractStats.optimize.portfolio.pso) export(extractStats.optimize.portfolio.random) export(extractStats.optimize.portfolio.ROI) export(extractStats) Modified: pkg/PortfolioAnalytics/R/extractstats.R =================================================================== --- pkg/PortfolioAnalytics/R/extractstats.R 2013-08-05 20:34:55 UTC (rev 2724) +++ pkg/PortfolioAnalytics/R/extractstats.R 2013-08-06 03:38:07 UTC (rev 2725) @@ -230,4 +230,70 @@ rnames<-c('out',paste('w',names(object$weights),sep='.')) names(result)<-rnames return(result) -} \ No newline at end of file +} + +#' extract some stats from a portfolio list run with pso via +#' \code{\link{optimize.portfolio}} +#' +#' This function will extract the weights (swarm positions) from the PSO output +#' and the out value (swarm fitness values) for each iteration of the optimization. +#' +#' @param object list returned by optimize.portfolio +#' @param prefix prefix to add to output row names +#' @param ... any other passthru parameters +#' @author Ross Bennett +#' @export +extractStats.optimize.portfolio.pso <- function(object, prefix=NULL, ...){ + if(!inherits(object, "optimize.portfolio.pso")) stop("object must be of class optimize.portfolio.pso") + + normalize_weights <- function(weights){ + # normalize results if necessary + if(!is.null(constraints$min_sum) | !is.null(constraints$max_sum)){ + # the user has passed in either min_sum or max_sum constraints for the portfolio, or both. + # we'll normalize the weights passed in to whichever boundary condition has been violated + # NOTE: this means that the weights produced by a numeric optimization algorithm like DEoptim + # might violate your constraints, so you'd need to renormalize them after optimizing + # we'll create functions for that so the user is less likely to mess it up. 
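# [Illustrative sketch, not part of this commit.] The normalization rule described
# in the comments above, written as a standalone helper; min_sum and max_sum stand
# in for the portfolio constraint bounds:
normalize_weights_sketch <- function(weights, min_sum = 0.99, max_sum = 1.01) {
  # scale down when the weights sum above the upper bound
  if (sum(weights) > max_sum) weights <- (max_sum / sum(weights)) * weights
  # scale up when the weights sum below the lower bound
  if (sum(weights) < min_sum) weights <- (min_sum / sum(weights)) * weights
  weights
}
normalize_weights_sketch(rep(0.3, 4))  # sums to 1.2, rescaled to sum to 1.01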
+ + ##' NOTE: need to normalize in the optimization wrapper too before we return, since we've normalized in here + ##' In Kris' original function, this was manifested as a full investment constraint + if(!is.null(constraints$max_sum) & constraints$max_sum != Inf ) { + max_sum=constraints$max_sum + if(sum(weights)>max_sum) { weights<-(max_sum/sum(weights))*weights } # normalize to max_sum + } + + if(!is.null(constraints$min_sum) & constraints$min_sum != -Inf ) { + min_sum=constraints$min_sum + if(sum(weights)<min_sum) { weights<-(min_sum/sum(weights))*weights } # normalize to min_sum + } Author: rossbennett34 Date: 2013-08-06 05:45:03 +0200 (Tue, 06 Aug 2013) New Revision: 2726 Added: pkg/PortfolioAnalytics/man/extractStats.optimize.portfolio.GenSA.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/extractstats.R Log: adding extractStats method for GenSA optimization method Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-06 03:38:07 UTC (rev 2725) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-06 03:45:03 UTC (rev 2726) @@ -25,6 +25,7 @@ export(diversification) export(extract.efficient.frontier) export(extractStats.optimize.portfolio.DEoptim) +export(extractStats.optimize.portfolio.GenSA) export(extractStats.optimize.portfolio.parallel) export(extractStats.optimize.portfolio.pso) export(extractStats.optimize.portfolio.random) Modified: pkg/PortfolioAnalytics/R/extractstats.R =================================================================== --- pkg/PortfolioAnalytics/R/extractstats.R 2013-08-06 03:38:07 UTC (rev 2725) +++ pkg/PortfolioAnalytics/R/extractstats.R 2013-08-06 03:45:03 UTC (rev 2726) @@ -297,3 +297,25 @@ rownames(result) <- paste(prefix, "pso.portf", index(tmp), sep=".") return(result) } + +#' extract some stats from a portfolio list run with GenSA via +#' \code{\link{optimize.portfolio}} +#' +#' This function will extract the optimal portfolio weights and objective measures. +#' The GenSA output does not store weights evaluated at each iteration. +#' The GenSA output for trace.mat contains nb.steps, temperature, function.value, and current.minimum. +#' +#' @param object list returned by optimize.portfolio +#' @param prefix prefix to add to output row names +#' @param ... any other passthru parameters +#' @export +extractStats.optimize.portfolio.GenSA <- function(object, prefix=NULL, ...) { + + trow<-c(out=object$out, object$weights) + obj <- unlist(object$objective_measures) + result <- c(obj, trow) + + rnames<-c('out',paste('w',names(object$weights),sep='.')) + names(result)<-c(names(obj), rnames) + return(result) +} Added: pkg/PortfolioAnalytics/man/extractStats.optimize.portfolio.GenSA.Rd =================================================================== --- pkg/PortfolioAnalytics/man/extractStats.optimize.portfolio.GenSA.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/extractStats.optimize.portfolio.GenSA.Rd 2013-08-06 03:45:03 UTC (rev 2726) @@ -0,0 +1,23 @@ +\name{extractStats.optimize.portfolio.GenSA} +\alias{extractStats.optimize.portfolio.GenSA} +\title{extract some stats from a portfolio list run with GenSA via +\code{\link{optimize.portfolio}}} +\usage{ + extractStats.optimize.portfolio.GenSA(object, + prefix = NULL, ...)
+} +\arguments{ + \item{object}{list returned by optimize.portfolio} + + \item{prefix}{prefix to add to output row names} + + \item{...}{any other passthru parameters} +} +\description{ + This function will extract the optimal portfolio weights + and objective measures. The GenSA output does not store + weights evaluated at each iteration. The GenSA output for + trace.mat contains nb.steps, temperature, function.value, + and current.minimum. +} + From noreply at r-forge.r-project.org Tue Aug 6 11:34:39 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 6 Aug 2013 11:34:39 +0200 (CEST) Subject: [Returnanalytics-commits] r2727 - pkg/Meucci/demo Message-ID: <20130806093439.226D9184C83@r-forge.r-project.org> Author: xavierv Date: 2013-08-06 11:34:38 +0200 (Tue, 06 Aug 2013) New Revision: 2727 Added: pkg/Meucci/demo/S_ResidualAnalysisTheory.R Log: - added S_ResidualAnalysisTheory demo script from chapter 3 Added: pkg/Meucci/demo/S_ResidualAnalysisTheory.R =================================================================== --- pkg/Meucci/demo/S_ResidualAnalysisTheory.R (rev 0) +++ pkg/Meucci/demo/S_ResidualAnalysisTheory.R 2013-08-06 09:34:38 UTC (rev 2727) @@ -0,0 +1,102 @@ +#' This script performs the analysis of residuals, as described in A. Meucci, "Risk and Asset Allocation", +#' Springer, 2005, Chapter 3. +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_ResidualAnalysisTheory.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} +#' @export + +################################################################################################################## +### Inputs +N = 3; # number of stocks +K = 2; # number of factors +tau = 12 / 12; # years +nSim = 10000; +useNonRandomFactor = FALSE; # NB: change here to true/false +useZeroMeanLinearFactors = TRUE; # NB: change here to true/false + +################################################################################################################## +### Generate covariance matrix with volatility 25% +Diag = 0.25 * array( 1, N+K ); +# make one factor deterministic -> add constant +if( useNonRandomFactor ) +{ + Diag[ length(Diag) ] = 10^( -26 ); +} +dd = matrix( runif( (N + K)^2), N + K, N + K ) - 0.5; +dd = dd %*% t( dd ); +C = cov2cor( dd ); + +covJointXF = diag( Diag, length(Diag) ) %*% C %*% diag( Diag, length(Diag) ); + +SigmaX = covJointXF[ 1:N, 1:N ]; +SigmaXF = covJointXF[ 1:N, (N+1):(N+K) ]; +SigmaF = covJointXF[ (N+1):(N+K), (N+1):(N+K) ]; + +################################################################################################################## +### Generate mean returns for stock and factors ~ 10% per annum +muX = 0.1 * rnorm(N); +muF = 0.1 * rnorm(K); + +################################################################################################################## +### Statistics of linear returns analytically, since Y = 1+[R; Z] is lognormal +mu = c( muX, muF ); +E_Y = exp( mu * tau + diag( covJointXF * tau ) / 2); +E_YY = ( E_Y %*% t(E_Y) ) * exp( covJointXF * tau ); +E_R = E_Y[ 1:N ] - array( 1, N ); +E_Z = E_Y[ (N+1):length(E_Y) ] - array( 1, K ); +E_RR = E_YY[ 1:N, 1:N ] - array( 1, N ) %*% t(E_R) - E_R %*% t( array( 1, N ) ) - matrix( 1, N, N ); +E_ZZ = E_YY[ (N+1):nrow(E_YY), (N+1):ncol(E_YY) ] - array( 1, K ) %*% t(E_Z) - E_Z %*% t(array( 1, K )) - matrix( 1, K, K ); +E_RZ = E_YY[ 1:N, (N+1):ncol(E_YY) ] - array( 1, N ) %*% t(E_Z) - E_R %*% t(array(1,K)) - matrix( 1, N, K ); +SigmaZ = E_ZZ - E_Z %*% t(E_Z); +SigmaR = E_RR - E_R
%*% t(E_R); +SigmaRZ = E_RZ - E_R %*% t(E_Z); + +################################################################################################################## +### Generate Monte Carlo simulations +sims = rmvnorm( nSim, mu * tau, covJointXF * tau ); +X = sims[ , 1:N ]; +F = sims[ , (N+1):ncol(sims) ]; +R = exp(X) - 1; +Z = exp(F) - 1; +# enforce Z sample to be zero-mean, equivalent to muF = -diag(SigmaF)/2 +if( useZeroMeanLinearFactors ) +{ + Z = Z - repmat( apply( Z, 2, mean ), nSim, 1 ); +} + +################################################################################################################## +### Compute sample estimates +E_R_hat = matrix( apply( R, 2, mean) ); +E_Z_hat = matrix( apply( Z, 2, mean) ); +SigmaR_hat = ( dim(R)[1] - 1 ) / dim(R)[1] * cov( R ); +SigmaZ_hat = ( dim(Z)[1] - 1 ) / dim(Z)[1] * cov( Z ); + +################################################################################################################## +### Compute simulation errors +errSigmaR = norm( SigmaR - SigmaR_hat, "F" ) / norm( SigmaR, "F" ); +printf( "Simulation error in sample cov(R) as a percentage on true cov(R) = %0.1f \n", errSigmaR * 100 ); +errSigmaZ = norm( SigmaZ - SigmaZ_hat, "F" ) / norm( SigmaZ, "F" ); +printf( "Simulation error in sample cov(Z) as a percentage on true cov(Z) = %0.1f \n", errSigmaZ * 100 ); + +################################################################################################################## +### Compute OLS loadings for the linear return model +B = E_RZ %*% solve( E_ZZ ); +ZZ = t(Z) %*% Z; +B_hat = t(R) %*% Z %*% solve(ZZ); +errB = norm( B - B_hat, "F" ) / norm( B, "F" ); +printf( "Simulation error in sample OLS loadings as a percentage on true OLS loadings = %0.1f \n", errB * 100 ); + +U = R - Z %*% t( B_hat ); +Corr = cor(cbind( U, Z )); + +Corr_U = Corr[ 1:N, 1:N ]; +Corr_UZ = Corr[ 1:N, (N+1):(N+K) ]; + +SigmaU_hat = ( dim(U)[1] - 1 ) / dim(U)[1] * cov( U ); +BSBplusSu = B_hat %*% SigmaZ_hat %*% t( B_hat ) + SigmaU_hat; +errSigmaR_model1 = norm( SigmaR_hat - BSBplusSu, "F" ) / norm( SigmaR_hat, "F" ); +printf( "Sigma_R -( B * Sigma_Z * t(B) + Sigma_U) = %0.1f \n", errSigmaR_model1 * 100 ); + From noreply at r-forge.r-project.org Tue Aug 6 11:49:20 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 6 Aug 2013 11:49:20 +0200 (CEST) Subject: [Returnanalytics-commits] r2728 - pkg/PerformanceAnalytics/sandbox/pulkit/week6 Message-ID: <20130806094920.8E7C41812A7@r-forge.r-project.org> Author: pulkit Date: 2013-08-06 11:49:20 +0200 (Tue, 06 Aug 2013) New Revision: 2728 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R pkg/PerformanceAnalytics/sandbox/pulkit/week6/Drawdownalpha.R Log: Average and Max drawdown for alpha and beta drawdown Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R 2013-08-06 09:34:38 UTC (rev 2727) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R 2013-08-06 09:49:20 UTC (rev 2728) @@ -30,7 +30,8 @@ #'@param p confidence level for calculation ,default(p=0.95) #'@param weights portfolio weighting vector, default NULL, see Details #' @param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default TRUE -#' @param invert TRUE/FALSE whether to invert the drawdown measure. see Details. 
+#' @param type The type of BetaDrawdown if specified alpha then the alpha value given is taken (default 0.95). If "average" then +#' alpha = 0 and if "max" then alpha = 1 is taken. #'@param \dots any passthru variable. #' #'@references @@ -42,7 +43,7 @@ #' #'BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431 -BetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,...){ +BetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ # DESCRIPTION: # @@ -61,6 +62,13 @@ columnnames = colnames(R) columns = ncol(R) drawdowns_m = Drawdowns(Rm) + type = type[1] + if(type=="average"){ + p = 0 + } + if(type == "max"){ + p = 1 + } if(!is.null(weights)){ x = Returns.portfolio(R,weights) } @@ -116,3 +124,5 @@ return(beta) } + + Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/Drawdownalpha.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/Drawdownalpha.R 2013-08-06 09:34:38 UTC (rev 2727) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/Drawdownalpha.R 2013-08-06 09:49:20 UTC (rev 2728) @@ -22,7 +22,9 @@ #'@param p confidence level for calculation ,default(p=0.95) #'@param weights portfolio weighting vector, default NULL, see Details #' @param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default TRUE +#' @param type The type of BetaDrawdown if specified alpha then the alpha value given is taken (default 0.95). If "average" then alpha = 0 and if "max" then alpha = 1 is taken. #'@param \dots any passthru variable +#' #'@references #'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model #'(CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University @@ -30,24 +32,35 @@ #' #'@examples #' +#'AlphaDrawdown(edhec[,1],edhec[,2]) ## expected value : 0.5141929 #' -AlphaDrawdown<-function(R,Rm,p=0.95,weights = NULL,geometric = TRUE,...){ +#'AlphaDrawdown(edhec[,1],edhec[,2],type="max") ## expected value : 0.8983177 +#' +#'AlphaDrawdown(edhec[,1],edhec[,2],type="average") ## expected value : 1.692592 +#'@export + + +AlphaDrawdown<-function(R,Rm,p=0.95,weights = NULL,geometric = TRUE,type=c("alpha","average","max"),...){ # DESCRIPTION: # This function calculates the drawdown alpha given the return series # and the optimal return series # # INPUTS: - # The return series of the portfolio , the return series of the optimal portfolio - # The confidence level, the weights and the type of cumulative returns + # The return series of the portfolio , the return series of the optimal + # portfolio. The confidence level, the weights and the type of cumulative + # returns. # OUTPUT: # The drawdown alpha is given as the output - # ERROR HANDLING !! + # TODO ERROR HANDLING + if(ncol(R) != ncol(Rm)){ + stop("The number of columns in R and Rm should be equal") + } x = checkData(R) xm = checkData(Rm) - beta = BetaDrawdown(R,Rm,p = p,weights=weights,geometric=geometric,...) + beta = BetaDrawdown(R,Rm,p = p,weights=weights,geometric=geometric,type=type,...) 
if(!is.null(weights)){ x = Returns.portfolio(R,weights) } From noreply at r-forge.r-project.org Tue Aug 6 12:41:54 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 6 Aug 2013 12:41:54 +0200 (CEST) Subject: [Returnanalytics-commits] r2729 - pkg/Meucci/demo Message-ID: <20130806104154.7310E185AE7@r-forge.r-project.org> Author: xavierv Date: 2013-08-06 12:41:54 +0200 (Tue, 06 Aug 2013) New Revision: 2729 Added: pkg/Meucci/demo/S_HedgeOptions.R Log: - added S_HedgeOptions demo script from chapter 3 Added: pkg/Meucci/demo/S_HedgeOptions.R =================================================================== --- pkg/Meucci/demo/S_HedgeOptions.R (rev 0) +++ pkg/Meucci/demo/S_HedgeOptions.R 2013-08-06 10:41:54 UTC (rev 2729) @@ -0,0 +1,109 @@ +################################################################################################################## +### This script compares hedging based on Black-Scholes deltas with Factors on Demand hedging +### == Chapter 3 == +################################################################################################################## + +################################################################################################################## +### Load data +load( "../data/implVol.Rda" ); + +################################################################################################################## +### Inputs +tau_tilde = 5; # estimation step (days) +tau = 40; # time to horizon (days) +Time2Mats = cbind( 100, 150, 200, 250, 300 ); # current time to maturity of call options in days +Strikes = cbind( 850, 880, 910, 940, 970 ); # strikes of call options, same dimension as Time2Mat + +r_free = 0.04; # risk-free rate +J = 10000; # number of simulations + +################################################################################################################## +### Underlying and volatility surface +numCalls = length( Time2Mats ); +timeLength = length( implVol$spot ); +numSurfPoints = length( implVol$days2Maturity ) * length( implVol$moneyness ); + +################################################################################################################## +### Estimate invariant distribution assuming normality +### variables in X are changes in log(spot) and changes in log(imp.vol) +### evaluated at the 'numSurfPoints' points on the vol surface (vectorized). 
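# [Illustrative aside, not part of this commit.] Since the invariants here are
# log-changes, projecting their moments from the tau_tilde = 5 day estimation
# step to the tau = 40 day horizon is the linear rescaling applied just below:
mu_5d  <- 0.001; var_5d <- 4e-04        # hypothetical 5-day moments
mu_5d  * 40 / 5                         # 40-day mean:     0.008
var_5d * 40 / 5                         # 40-day variance: 0.0032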
+X = matrix( 0, timeLength-1, numSurfPoints + 1 ); +# log-changes of underlying spot +X[ , 1 ] = diff( log( implVol$spot ) ); + +# log-changes of implied vol for different maturities +impVolSeries = matrix( implVol$impVol, timeLength, numSurfPoints); +for( i in 1 : numSurfPoints ) +{ + X[ , i + 1 ] = diff( log( impVolSeries[ , i ] ) ); +} +muX = apply(X, 2, mean ); +SigmaX = ( dim(X)[1]-1)/dim(X)[1] * cov( X ); + +################################################################################################################## +### Project distribution to investment horizon +muX = muX * tau / tau_tilde; +SigmaX = SigmaX * tau / tau_tilde; + +################################################################################################################## +### Linearly interpolate the vol surface at the current time to obtain +### implied vol for the given calls today, and price the calls +spot_T = implVol$spot[ length(implVol$spot) ]; +volSurf_T = drop( implVol$impVol[ dim( implVol$impVol )[1], , ] ); +time2Mat_T = Time2Mats; +moneyness_T = Strikes / spot_T; +impVol_T = t(InterExtrapolate( volSurf_T,cbind( t(time2Mat_T), t( moneyness_T)), list( implVol$days2Maturity, implVol$moneyness ) ) ); # function by John D'Errico +callPrice_T = BlackScholesCallPrice( spot_T, Strikes, r_free, impVol_T, Time2Mats/252 )$c; + +################################################################################################################## +### Generate simulations at horizon +X_ = rmvnorm( J, muX, SigmaX ); + +################################################################################################################## +### Interpolate vol surface at horizon for the given calls +spot_ = spot_T * exp(X_[ , 1 ]); +impVol_ = matrix( 0, J, numCalls); +for( j in 1:J ) +{ + volSurf = volSurf_T *exp( matrix( X_[ j, -1 ], length( implVol$days2Maturity ), length( implVol$moneyness ) ) ); + time2Mat_ = Time2Mats - tau; + moneyness_ = Strikes / spot_[ j ]; + impVol_[ j, ] = t( InterExtrapolate( volSurf, cbind( t( time2Mat_ ), t( moneyness_) ), list( implVol$days2Maturity, implVol$moneyness ) ) ); # function by John D'Errico +} + +################################################################################################################## +### Price the calls +callPrice_ = matrix( 0, J, numCalls ); +for( i in 1 : numCalls ) +{ + callPrice_[ , i ] = BlackScholesCallPrice( spot_, Strikes[ i ], r_free, impVol_[ , i ], time2Mat_[ i ] / 252 )$c; +} +# linear returns of the calls +Rc = callPrice_ / repmat( callPrice_T, J, 1) - 1 ; +# linear returns of the underlying +Rsp = spot_ / spot_T - 1; + +################################################################################################################## +# Compute the OLS linear (affine) model: Rc = a + b*Rsp + U +Z = cbind( array( 1, J), Rsp ); +olsLoadings = ( t(Rc) %*% Z) %*% solve( t(Z) %*% Z ); +a = olsLoadings[ , 1 ]; +b = olsLoadings[ , 2 ]; + +################################################################################################################## +# Compute Black-Scholes delta and cash held in replicating portfolio +BSCP = BlackScholesCallPrice( spot_T, Strikes, r_free, impVol_T, Time2Mats / 252 ); +a_bs = BSCP$cash / BSCP$c * r_free * tau / 252; +b_bs = t( BSCP$delta / BSCP$c * spot_T); + +printf( "OLS: a = [ %s\t]\n", sprintf("\t%7.4f", t(a) ) ); +printf( "B-S: a = [ %s\t]\n", sprintf("\t%7.4f", t(a_bs) ) ); +printf( "OLS: b = [ %s\t]\n", sprintf("\t%7.4f", t(b) ) ); +printf( "B-S: b = [ %s\t]\n", sprintf("\t%7.4f", t(b_bs) ) ); + +for(
i in 1 : numCalls ) +{ + dev.new(); + plot( Rsp, Rc[ , i ], xlab = "return underlying" , ylab = "return call option"); +} + From noreply at r-forge.r-project.org Tue Aug 6 12:44:11 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 6 Aug 2013 12:44:11 +0200 (CEST) Subject: [Returnanalytics-commits] r2730 - pkg/Meucci/demo Message-ID: <20130806104411.69813185AE7@r-forge.r-project.org> Author: xavierv Date: 2013-08-06 12:44:11 +0200 (Tue, 06 Aug 2013) New Revision: 2730 Modified: pkg/Meucci/demo/S_HedgeOptions.R Log: - fixed documentation for S_HedgeOptions demo script Modified: pkg/Meucci/demo/S_HedgeOptions.R =================================================================== --- pkg/Meucci/demo/S_HedgeOptions.R 2013-08-06 10:41:54 UTC (rev 2729) +++ pkg/Meucci/demo/S_HedgeOptions.R 2013-08-06 10:44:11 UTC (rev 2730) @@ -1,5 +1,14 @@ +#' This script compares hedging based on Black-Scholes deltas with Factors on Demand hedging, as described in +#' A. Meucci "Risk and Asset Allocation", Springer, 2005, Chapter 3. +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_HedgeOptions.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} + ################################################################################################################## -### This script compares hedging based on Black-Scholes deltas with Factors on Demand hedging +### ### == Chapter 3 == ################################################################################################################## From noreply at r-forge.r-project.org Tue Aug 6 18:35:54 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 6 Aug 2013 18:35:54 +0200 (CEST) Subject: [Returnanalytics-commits] r2731 - in pkg/Meucci: data demo Message-ID: <20130806163554.C7E4C185AEE@r-forge.r-project.org> Author: xavierv Date: 2013-08-06 18:35:54 +0200 (Tue, 06 Aug 2013) New Revision: 2731 Added: pkg/Meucci/data/securitiesIndustryClassification.Rda pkg/Meucci/data/securitiesTS.Rda pkg/Meucci/demo/S_CrossSectionIndustries.R Log: - added S_CrossSectionIndustries demo script from chapter 3 and two data files Added: pkg/Meucci/data/securitiesIndustryClassification.Rda =================================================================== (Binary files differ) Property changes on: pkg/Meucci/data/securitiesIndustryClassification.Rda ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: pkg/Meucci/data/securitiesTS.Rda =================================================================== (Binary files differ) Property changes on: pkg/Meucci/data/securitiesTS.Rda ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: pkg/Meucci/demo/S_CrossSectionIndustries.R =================================================================== --- pkg/Meucci/demo/S_CrossSectionIndustries.R (rev 0) +++ pkg/Meucci/demo/S_CrossSectionIndustries.R 2013-08-06 16:35:54 UTC (rev 2731) @@ -0,0 +1,72 @@ +#' This script fits a cross-sectional linear factor model creating industry factors, as described in A. Meucci, +#' "Risk and Asset Allocation", Springer, 2005, Chapter 3. 
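# [Illustrative sketch, not part of this commit.] The industry-factor construction
# in this script is a per-date weighted least squares, f_t = (B' Phi B)^{-1} B' Phi x_t,
# with Phi the diagonal matrix of inverse asset variances. On toy data:
set.seed(1)
X_demo <- matrix(rnorm(200), 50, 4)          # T x N asset returns
B_demo <- matrix(runif(8), 4, 2)             # N x K industry exposures
Phi_demo <- diag(1 / apply(X_demo, 2, var))  # inverse-variance weights
F_demo <- t(solve(t(B_demo) %*% Phi_demo %*% B_demo,
                  t(B_demo) %*% Phi_demo %*% t(X_demo)))  # T x K factor scores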
+#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_CrossSectionIndustries.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} + +################################################################################################################## +### Load data +# loads weekly stock returns X and indices stock returns F +load("../data/securitiesTS.Rda"); +Data_Securities = securitiesTS$data[ , -1 ]; # 1st column is date + +load("../data/securitiesIndustryClassification.Rda"); +Securities_IndustryClassification = securitiesIndustryClassification$data; + +################################################################################################################## +### Estimation +# linear returns for stocks +X = diff( Data_Securities ) / Data_Securities[ -nrow(Data_Securities), ]; + +T = dim(X)[1]; +N = dim(X)[2]; +B = Securities_IndustryClassification[ 1:N, ]; +K = dim(B)[ 2 ]; + +# compute sample estimates +E_X = matrix( apply(X, 2, mean ) ); +Sigma_X = (dim(X)[1]-1)/dim(X)[1] * cov(X); + +# The optimal loadings turn out to be the standard multivariate weighted-OLS. +Phi = diag(1 / diag( Sigma_X ), length(diag( Sigma_X ) ) ); +tmp = t(B) %*% Phi %*% B ; +F = t( diag( diag(tmp) ^( -1 ), dim(tmp) ) %*% t(B) %*% Phi %*% t(X)); + +# compute intercept a and residual U +E_F = matrix( apply( F, 2, mean ) ); +a = E_X - B %*% E_F; +A_ = repmat( t(a), T, 1); +U = X - A_ - F %*% t(B); + +################################################################################################################## +### Residual analysis +M = cbind( U, F ); +SigmaJoint_UF = (dim(M)[1]-1)/dim(M)[1] * cov( M ); +CorrJoint_UF = cov2cor(SigmaJoint_UF); +Sigma_F = (dim(F)[1]-1)/dim(F)[1] * cov(F); +Corr_F = cov2cor( Sigma_F ); +Corr_F = tril(Corr_F, -1); +Corr_F = Corr_F[ Corr_F != 0 ]; + +Corr_U = tril(CorrJoint_UF[ 1:N, 1:N ], -1); +Corr_U = Corr_U[ Corr_U != 0 ]; +mean_Corr_U = mean( abs(Corr_U)); +max_Corr_U = max( abs(Corr_U)); +disp(mean_Corr_U); +disp(max_Corr_U); + +dev.new(); +hist(Corr_U, 100); + +Corr_UF = CorrJoint_UF[ 1:N, (N+1):(N+K) ]; +mean_Corr_UF = mean( abs(Corr_UF ) ); +max_Corr_UF = max( abs(Corr_UF ) ); +disp(mean_Corr_UF); +disp(max_Corr_UF); + +dev.new(); +hist(Corr_UF, 100); + From noreply at r-forge.r-project.org Tue Aug 6 21:15:24 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 6 Aug 2013 21:15:24 +0200 (CEST) Subject: [Returnanalytics-commits] r2732 - pkg/PortfolioAnalytics/inst/doc Message-ID: <20130806191524.59E51180FDD@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-06 21:15:24 +0200 (Tue, 06 Aug 2013) New Revision: 2732 Modified: pkg/PortfolioAnalytics/inst/doc/optimization-overview.Snw Log: modifying the optimization-overview so that the R code chunks work with the v1 interface and the package will build on R-forge Modified: pkg/PortfolioAnalytics/inst/doc/optimization-overview.Snw =================================================================== --- pkg/PortfolioAnalytics/inst/doc/optimization-overview.Snw 2013-08-06 16:35:54 UTC (rev 2731) +++ pkg/PortfolioAnalytics/inst/doc/optimization-overview.Snw 2013-08-06 19:15:24 UTC (rev 2732) @@ -1,361 +1,361 @@ -\documentclass[a4paper]{article} -\usepackage[round]{natbib} -\usepackage{bm} -\usepackage{verbatim} -\usepackage[latin1]{inputenc} -% \VignetteIndexEntry{Portfolio Optimization with CVaR budgets in PortfolioAnalytics} -\bibliographystyle{abbrvnat} - -\usepackage{url} - -\let\proglang=\textsf
-\newcommand{\pkg}[1]{{\fontseries{b}\selectfont #1}} -\newcommand{\R}[1]{{\fontseries{b}\selectfont #1}} -\newcommand{\email}[1]{\href{mailto:#1}{\normalfont\texttt{#1}}} -\newcommand{\E}{\mathsf{E}} -\newcommand{\VAR}{\mathsf{VAR}} -\newcommand{\COV}{\mathsf{COV}} -\newcommand{\Prob}{\mathsf{P}} - -\renewcommand{\topfraction}{0.85} -\renewcommand{\textfraction}{0.1} -\renewcommand{\baselinestretch}{1.5} -\setlength{\textwidth}{15cm} \setlength{\textheight}{22cm} \topmargin-1cm \evensidemargin0.5cm \oddsidemargin0.5cm - -\usepackage[latin1]{inputenc} -% or whatever - -\usepackage{lmodern} -\usepackage[T1]{fontenc} -% Or whatever. Note that the encoding and the font should match. If T1 -% does not look nice, try deleting the line with the fontenc. - -\begin{document} - -\title{Vignette: Portfolio Optimization with CVaR budgets\\ -in PortfolioAnalytics} -\author{Kris Boudt, Peter Carl and Brian Peterson } -\date{June 1, 2010} - -\maketitle -\tableofcontents - - -\bigskip - -\section{General information} - -Risk budgets are a central tool to estimate and manage the portfolio risk allocation. They decompose total portfolio risk into the risk contribution of each position. \citet{ BoudtCarlPeterson2010} propose several portfolio allocation strategies that use an appropriate transformation of the portfolio Conditional Value at Risk (CVaR) budget as an objective or constraint in the portfolio optimization problem. This document explains how risk allocation optimized portfolios can be obtained under general constraints in the \verb"PortfolioAnalytics" package of \citet{PortAnalytics}. - -\verb"PortfolioAnalytics" is designed to provide numerical solutions for portfolio problems with complex constraints and objective sets comprised of any R function. It can e.g.~construct portfolios that minimize a risk objective with (possibly non-linear) per-asset constraints on returns and drawdowns \citep{CarlPetersonBoudt2010}. The generality of possible constraints and objectives is a distinctive characteristic of the package with respect to RMetrics \verb"fPortfolio" of \citet{fPortfolioBook}. For standard Markowitz optimization problems, use of \verb"fPortfolio" rather than \verb"PortfolioAnalytics" is recommended. - -\verb"PortfolioAnalytics" solves the following type of problem -\begin{equation} \min_w g(w) \ \ s.t. \ \ -\left\{ \begin{array}{l} h_1(w)\leq 0 \\ \vdots \\ h_q(w)\leq 0. \end{array} \right. \label{optimproblem}\end{equation} \verb"PortfolioAnalytics" first merges the objective function and constraints into a penalty augmented objective function -\begin{equation} L(w) = g(w) + \mbox{penalty}\sum_{i=1}^q \lambda_i \max(h_i(w),0), \label{eq:constrainedobj} \end{equation} -where $\lambda_i$ is a multiplier to tune the relative importance of the constraints. The default values of penalty and $\lambda_i$ (called \verb"multiplier" in \verb"PortfolioAnalytics") are 10000 and 1, respectively. - -The minimum of this function is found through the \emph{Differential Evolution} (DE) algorithm of \citet{StornPrice1997} and ported to R by \citet{MullenArdiaGilWindoverCline2009}. DE is known for remarkable performance regarding continuous numerical problems \citep{PriceStornLampinen2006}. It has recently been advocated for optimizing portfolios under non-convex settings by \citet{Ardia2010} and \citet{Yollin2009}, among others. We use the R implementation of DE in the \verb"DEoptim" package of \citet{DEoptim}. 
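To make the penalty-augmented objective described above concrete, a minimal editorial sketch (not part of the committed vignette; g and h1 are toy stand-ins for an objective and a constraint function):

penalty <- 10000
L_obj <- function(w, g, h_list, multiplier = rep(1, length(h_list))) {
  viol <- mapply(function(h, m) m * max(h(w), 0), h_list, multiplier)
  g(w) + penalty * sum(viol)
}
g  <- function(w) sum(w^2)             # stand-in risk objective
h1 <- function(w) abs(sum(w) - 1)      # h1(w) <= 0 enforces full investment
L_obj(c(0.3, 0.3, 0.2, 0.2), g, list(h1))  # feasible: penalty term is zero
L_obj(rep(0.5, 4), g, list(h1))            # sum(w) = 2: adds 10000 * 1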
- -The latest version of the \verb"PortfolioAnalytics" package can be downloaded from R-forge through the following command: -\begin{verbatim} -install.packages("PortfolioAnalytics", repos="http://R-Forge.R-project.org") -\end{verbatim} - -Its principal functions are: -\begin{itemize} -\item \verb"constraint(assets,min,max,min_sum,max_sum)": the portfolio optimization specification starts with specifying the shape of the weight vector through the function \verb"constraint". The weights have to be between \verb"min} and \verb"max" and their sum between \verb"min_sum" and \verb"max_sum". The first argument \verb"assets" is either a number indicating the number of portfolio assets or a vector holding the names of the assets. - -\item \verb"add.objective(constraints, type, name)": \verb"constraints" is a list holding the objective to be minimized and the constraints. New elements to this list are added by the function \verb"add.objective". Many common risk budget objectives and constraints are prespecified and can be identified by specifying the \verb"type" and \verb"name". - - -\item \verb"constrained_objective(w, R, constraints)": given the portfolio weight and return data, it evaluates the penalty augmented objective function in (\ref{eq:constrainedobj}). - -\item \verb"optimize.portfolio(R,constraints)": this function returns the portfolio weight that solves the problem in (\ref{optimproblem}). {\it R} is the multivariate return series of the portfolio components. - -\item \verb"optimize.portfolio.rebalancing(R,constraints,rebalance_on,trailing_periods": this function solves the multiperiod optimization problem. It returns for each rebalancing period the optimal weights and allows the estimation sample to be either from inception or a moving window. - -\end{itemize} - -Next we illustrate these functions on monthly return data for bond, US equity, international equity and commodity indices, which are the first 4 series -in the dataset \verb"indexes". The first step is to load the package \verb"PortfolioAnalytics" and the dataset. An important first note is that some of the functions (especially \verb" optimize.portfolio.rebalancing") requires the dataset to be a \verb"xts" object \citep{xts}. - - -<>= -options(width=80) -@ - -<>=| -library(PortfolioAnalytics) -#source("constrained_objective.R") -data(indexes) -class(indexes) -indexes <- indexes[,1:4] -head(indexes,2) -tail(indexes,2) -@ - -In what follows, we first illustrate the construction of the penalty augmented objective function. Then we present the code for solving the optimization problem. - -\section{Setting of the objective function} - -\subsection{Weight constraints} - -<>=| -Wcons <- constraint( assets = colnames(indexes[,1:4]) ,min = rep(0,4), max=rep(1,4), min_sum=1,max_sum=1 ) -@ - -Given the weight constraints, we can call the value of the function to be minimized. We consider the case of no violation and a case of violation. By default, \verb"normalize=TRUE" which means that if the sum of weights exceeds \verb"max_sum", the weight vector is normalized by multiplying it with \verb"sum(weights)/max_sum" such that the weights evaluated in the objective function satisfy the \verb"max_sum" constraint. 
-<>=| -constrained_objective( w = rep(1/4,4) , R = indexes[,1:4] , constraints = Wcons) -constrained_objective( w = rep(1/3,4) , R = indexes[,1:4] , constraints = Wcons) -constrained_objective( w = rep(1/3,4) , R = indexes[,1:4] , constraints = Wcons, normalize=FALSE) -@ - -The latter value can be recalculated as penalty times the weight violation, that is: $10000 \times 1/3.$ - -\subsection{Minimum CVaR objective function} - -Suppose now we want to find the portfolio that minimizes the 95\% portfolio CVaR subject to the weight constraints listed above. - -<>=| -ObjSpec = add.objective( constraints = Wcons , type="risk",name="CVaR", -arguments=list(p=0.95), enabled=TRUE) -@ - -The value of the objective function is: -<>=| -constrained_objective( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec) -@ -This is the CVaR of the equal-weight portfolio as computed by the function \verb"ES" in the \verb"PerformanceAnalytics" package of \citet{ Carl2007} -<>=| -library(PerformanceAnalytics) -out<-ES(indexes[,1:4],weights = rep(1/4,4),p=0.95, portfolio_method="component") -out$MES -@ -All arguments in the function \verb"ES" can be passed on through \verb"arguments". E.g. to reduce the impact of extremes on the portfolio results, it is recommended to winsorize the data using the option clean="boudt". - -<>=| -out<-ES(indexes[,1:4],weights = rep(1/4,4),p=0.95,clean="boudt", portfolio_method="component") -out$MES -@ - - - -For the formulation of the objective function, this implies setting: -<>=| -ObjSpec = add.objective( constraints = Wcons , type="risk",name="CVaR", -arguments=list(p=0.95,clean="boudt"), enabled=TRUE) -constrained_objective( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec) -@ - -An additional argument that is not available for the moment in \verb"ES" is to estimate the conditional covariance matrix trough -the constant conditional correlation model of \citet{Bollerslev90}. - -For the formulation of the objective function, this implies setting: -<>=| -ObjSpec = add.objective( constraints = Wcons , type="risk",name="CVaR", -arguments=list(p=0.95,clean="boudt"), enabled=TRUE, garch=TRUE) -constrained_objective( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec) -@ - -\subsection{Minimum CVaR concentration objective function} - -Add the minimum 95\% CVaR concentration objective to the objective function: -<>=| -ObjSpec = add.objective( constraints = Wcons , type="risk_budget_objective",name="CVaR", -arguments=list(p=0.95,clean="boudt"), min_concentration=TRUE, enabled=TRUE) -@ -The value of the objective function is: -<>=| -constrained_objective( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec) -@ -We can verify that this is effectively the largest CVaR contribution of that portfolio as follows: -<>=| -ES(indexes[,1:4],weights = rep(1/4,4),p=0.95,clean="boudt", portfolio_method="component") -@ - -\subsection{Risk allocation constraints} - -We see that in the equal-weight portfolio, the international equities and commodities investment -cause more than 30\% of total risk. We could specify as a constraint that no asset can contribute -more than 30\% to total portfolio risk. 
This involves the construction of the following objective function: - -<>=| -ObjSpec = add.objective( constraints = Wcons , type="risk_budget_objective",name="CVaR", max_prisk = 0.3, -arguments=list(p=0.95,clean="boudt"), enabled=TRUE) -constrained_objective( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec) -@ - -This value corresponds to the penalty parameter which has by default the value of 10000 times the exceedances: $ 10000*(0.045775103+0.054685023)\approx 1004.601.$ - -\section{Optimization} - -The penalty augmented objective function is minimized through Differential Evolution. Two parameters are crucial in tuning the optimization: \verb"search_size" and \verb"itermax". The optimization routine -\begin{enumerate} -\item First creates the initial generation of \verb"NP= search_size/itermax" guesses for the optimal value of the parameter vector, using the \verb"random_portfolios" function generating random weights satisfying the weight constraints. -\item Then DE evolves over this population of candidate solutions using alteration and selection operators in order to minimize the objective function. It restarts \verb"itermax" times. -\end{enumerate} It is important that \verb"search_size/itermax" is high enough. It is generally recommended that this ratio is at least ten times the length of the weight vector. For more details on the use of DE strategy in portfolio allocation, we refer the -reader to \citet{Ardia2010}. - -\subsection{Minimum CVaR portfolio under an upper 40\% CVaR allocation constraint} - -The functions needed to obtain the minimum CVaR portfolio under an upper 40\% CVaR allocation constraint are the following: -\begin{verbatim} -> ObjSpec <- constraint(assets = colnames(indexes[,1:4]),min = rep(0,4), -+ max=rep(1,4), min_sum=1,max_sum=1 ) -> ObjSpec <- add.objective( constraints = ObjSpec, type="risk", -+ name="CVaR", arguments=list(p=0.95,clean="boudt"),enabled=TRUE) -> ObjSpec <- add.objective( constraints = ObjSpec, -+ type="risk_budget_objective", name="CVaR", max_prisk = 0.4, -+ arguments=list(p=0.95,clean="boudt"), enabled=TRUE) -> set.seed(1234) -> out = optimize.portfolio(R= indexes[,1:4],constraints=ObjSpec, -+ optimize_method="DEoptim",itermax=10, search_size=2000) -\end{verbatim} -After the call to these functions it starts to explore the feasible space iteratively: -\begin{verbatim} -Iteration: 1 bestvalit: 0.029506 bestmemit: 0.810000 0.126000 0.010000 0.140000 -Iteration: 2 bestvalit: 0.029506 bestmemit: 0.810000 0.126000 0.010000 0.140000 -Iteration: 3 bestvalit: 0.029272 bestmemit: 0.758560 0.079560 0.052800 0.112240 -Iteration: 4 bestvalit: 0.029272 bestmemit: 0.758560 0.079560 0.052800 0.112240 -Iteration: 5 bestvalit: 0.029019 bestmemit: 0.810000 0.108170 0.010000 0.140000 -Iteration: 6 bestvalit: 0.029019 bestmemit: 0.810000 0.108170 0.010000 0.140000 -Iteration: 7 bestvalit: 0.029019 bestmemit: 0.810000 0.108170 0.010000 0.140000 -Iteration: 8 bestvalit: 0.028874 bestmemit: 0.692069 0.028575 0.100400 0.071600 -Iteration: 9 bestvalit: 0.028874 bestmemit: 0.692069 0.028575 0.100400 0.071600 -Iteration: 10 bestvalit: 0.028874 bestmemit: 0.692069 0.028575 0.100400 0.071600 -elapsed time:1.85782111114926 -\end{verbatim} - -If \verb"TRACE=FALSE" the only output in \verb"out" is the weight vector that optimizes the objective function. 
- -\begin{verbatim} -> out[[1]] - US Bonds US Equities Int'l Equities Commodities - 0.77530240 0.03201150 0.11247491 0.08021119 \end{verbatim} - -If \verb"TRACE=TRUE" additional information is given such as the value of the objective function and the different constraints. - -\subsection{Minimum CVaR concentration portfolio} - -The functions needed to obtain the minimum CVaR concentration portfolio are the following: - -\begin{verbatim} -> ObjSpec <- constraint(assets = colnames(indexes[,1:4]) ,min = rep(0,4), -+ max=rep(1,4), min_sum=1,max_sum=1 ) -> ObjSpec <- add.objective( constraints = ObjSpec, -+ type="risk_budget_objective", name="CVaR", -+ arguments=list(p=0.95,clean="boudt"), -+ min_concentration=TRUE,enabled=TRUE) -> set.seed(1234) -> out = optimize.portfolio(R= indexes[,1:4],constraints=ObjSpec, -+ optimize_method="DEoptim",itermax=50, search_size=5000) -\end{verbatim} -The iterations are as follows: -\begin{verbatim} -Iteration: 1 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000 -Iteration: 2 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000 -Iteration: 3 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000 -Iteration: 4 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000 -Iteration: 5 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000 -Iteration: 45 bestvalit: 0.008209 bestmemit: 0.976061 0.151151 0.120500 0.133916 -Iteration: 46 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004 -Iteration: 47 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004 -Iteration: 48 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004 -Iteration: 49 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004 -Iteration: 50 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004 -elapsed time:4.1324522222413 -\end{verbatim} -This portfolio has the equal risk contribution characteristic: -\begin{verbatim} -> out[[1]] - US Bonds US Equities Int'l Equities Commodities - 0.70528537 0.11118139 0.08610905 0.09742419 -> ES(indexes[,1:4],weights = out[[1]],p=0.95,clean="boudt", -+ portfolio_method="component") -$MES - [,1] -[1,] 0.03246264 - -$contribution - US Bonds US Equities Int'l Equities Commodities - 0.008169565 0.008121930 0.008003228 0.008167917 - -$pct_contrib_MES - US Bonds US Equities Int'l Equities Commodities - 0.2516605 0.2501931 0.2465366 0.2516098 \end{verbatim} - - - - -\subsection{Dynamic optimization} - -Dynamic rebalancing of the risk budget optimized portfolio is possible through the function \verb"optimize.portfolio.rebalancing". Additional arguments are \verb"rebalance\_on} which indicates the rebalancing frequency (years, quarters, months). The estimation is either done from inception (\verb"trailing\_periods=0") or through moving window estimation, where each window has \verb"trailing_periods" observations. The minimum number of observations in the estimation sample is specified by \verb"training_period". Its default value is 36, which corresponds to three years for monthly data. - -As an example, consider the minimum CVaR concentration portfolio, with estimation from in inception and monthly rebalancing. Since we require a minimum estimation length of total number of observations -1, we can optimize the portfolio only for the last two months. 
- -\begin{verbatim} -> set.seed(1234) -> out = optimize.portfolio.rebalancing(R= indexes,constraints=ObjSpec, rebalance_on ="months", -+ optimize_method="DEoptim",itermax=50, search_size=5000, training_period = nrow(indexes)-1 ) -\end{verbatim} - -For each of the optimization, the iterations are given as intermediate output: -\begin{verbatim} -Iteration: 1 bestvalit: 0.010655 bestmemit: 0.800000 0.100000 0.118000 0.030000 -Iteration: 2 bestvalit: 0.010655 bestmemit: 0.800000 0.100000 0.118000 0.030000 -Iteration: 49 bestvalit: 0.008207 bestmemit: 0.787525 0.124897 0.098001 0.108258 -Iteration: 50 bestvalit: 0.008195 bestmemit: 0.774088 0.122219 0.095973 0.104338 -elapsed time:4.20546416666773 -Iteration: 1 bestvalit: 0.011006 bestmemit: 0.770000 0.050000 0.090000 0.090000 -Iteration: 2 bestvalit: 0.010559 bestmemit: 0.498333 0.010000 0.070000 0.080000 -Iteration: 49 bestvalit: 0.008267 bestmemit: 0.828663 0.126173 0.100836 0.114794 -Iteration: 50 bestvalit: 0.008267 bestmemit: 0.828663 0.126173 0.100836 0.114794 -elapsed time:4.1060591666566 -overall elapsed time:8.31152777777778 -\end{verbatim} -The output is a list holding for each rebalancing period the output of the optimization, such as portfolio weights. -\begin{verbatim} -> out[[1]]$weights - US Bonds US Equities Int'l Equities Commodities - 0.70588695 0.11145087 0.08751686 0.09514531 -> out[[2]]$weights - US Bonds US Equities Int'l Equities Commodities - 0.70797640 0.10779728 0.08615059 0.09807574 -\end{verbatim} -But also the value of the objective function: -\begin{verbatim} -> out[[1]]$out -[1] 0.008195072 -> out[[2]]$out -[1] 0.008266844 -\end{verbatim} -The first and last observation from the estimation sample: -\begin{verbatim} -> out[[1]]$data_summary -$first - US Bonds US Equities Int'l Equities Commodities -1980-01-31 -0.0272 0.061 0.0462 0.0568 - -$last - US Bonds US Equities Int'l Equities Commodities -2009-11-30 0.0134 0.0566 0.0199 0.015 - -> out[[2]]$data_summary -$first - US Bonds US Equities Int'l Equities Commodities -1980-01-31 -0.0272 0.061 0.0462 0.0568 - -$last - US Bonds US Equities Int'l Equities Commodities -2009-12-31 -0.0175 0.0189 0.0143 0.0086 -\end{verbatim} - -Of course, DE is a stochastic optimizaer and typically will only find a near-optimal solution that depends on the seed. The function \verb"optimize.portfolio.parallel" in \verb"PortfolioAnalytics" allows to run an arbitrary number of portfolio sets in parallel in order to develop "confidence bands" around your solution. It is based on Revolution's \verb"foreach" package \citep{foreach}. - -\bibliography{PA} - - -\end{document} - +\documentclass[a4paper]{article} +\usepackage[round]{natbib} +\usepackage{bm} +\usepackage{verbatim} +\usepackage[latin1]{inputenc} +% \VignetteIndexEntry{Portfolio Optimization with CVaR budgets in PortfolioAnalytics} +\bibliographystyle{abbrvnat} + +\usepackage{url} + +\let\proglang=\textsf +\newcommand{\pkg}[1]{{\fontseries{b}\selectfont #1}} +\newcommand{\R}[1]{{\fontseries{b}\selectfont #1}} +\newcommand{\email}[1]{\href{mailto:#1}{\normalfont\texttt{#1}}} +\newcommand{\E}{\mathsf{E}} +\newcommand{\VAR}{\mathsf{VAR}} +\newcommand{\COV}{\mathsf{COV}} +\newcommand{\Prob}{\mathsf{P}} + +\renewcommand{\topfraction}{0.85} +\renewcommand{\textfraction}{0.1} +\renewcommand{\baselinestretch}{1.5} +\setlength{\textwidth}{15cm} \setlength{\textheight}{22cm} \topmargin-1cm \evensidemargin0.5cm \oddsidemargin0.5cm + +\usepackage[latin1]{inputenc} +% or whatever + +\usepackage{lmodern} +\usepackage[T1]{fontenc} +% Or whatever. 
Note that the encoding and the font should match. If T1 +% does not look nice, try deleting the line with the fontenc. + +\begin{document} + +\title{Vignette: Portfolio Optimization with CVaR budgets\\ +in PortfolioAnalytics} +\author{Kris Boudt, Peter Carl and Brian Peterson } +\date{June 1, 2010} + +\maketitle +\tableofcontents + + +\bigskip + +\section{General information} + +Risk budgets are a central tool to estimate and manage the portfolio risk allocation. They decompose total portfolio risk into the risk contribution of each position. \citet{ BoudtCarlPeterson2010} propose several portfolio allocation strategies that use an appropriate transformation of the portfolio Conditional Value at Risk (CVaR) budget as an objective or constraint in the portfolio optimization problem. This document explains how risk allocation optimized portfolios can be obtained under general constraints in the \verb"PortfolioAnalytics" package of \citet{PortAnalytics}. + +\verb"PortfolioAnalytics" is designed to provide numerical solutions for portfolio problems with complex constraints and objective sets comprised of any R function. It can e.g.~construct portfolios that minimize a risk objective with (possibly non-linear) per-asset constraints on returns and drawdowns \citep{CarlPetersonBoudt2010}. The generality of possible constraints and objectives is a distinctive characteristic of the package with respect to RMetrics \verb"fPortfolio" of \citet{fPortfolioBook}. For standard Markowitz optimization problems, use of \verb"fPortfolio" rather than \verb"PortfolioAnalytics" is recommended. + +\verb"PortfolioAnalytics" solves the following type of problem +\begin{equation} \min_w g(w) \ \ s.t. \ \ +\left\{ \begin{array}{l} h_1(w)\leq 0 \\ \vdots \\ h_q(w)\leq 0. \end{array} \right. \label{optimproblem}\end{equation} \verb"PortfolioAnalytics" first merges the objective function and constraints into a penalty augmented objective function +\begin{equation} L(w) = g(w) + \mbox{penalty}\sum_{i=1}^q \lambda_i \max(h_i(w),0), \label{eq:constrainedobj} \end{equation} +where $\lambda_i$ is a multiplier to tune the relative importance of the constraints. The default values of penalty and $\lambda_i$ (called \verb"multiplier" in \verb"PortfolioAnalytics") are 10000 and 1, respectively. + +The minimum of this function is found through the \emph{Differential Evolution} (DE) algorithm of \citet{StornPrice1997} and ported to R by \citet{MullenArdiaGilWindoverCline2009}. DE is known for remarkable performance regarding continuous numerical problems \citep{PriceStornLampinen2006}. It has recently been advocated for optimizing portfolios under non-convex settings by \citet{Ardia2010} and \citet{Yollin2009}, among others. We use the R implementation of DE in the \verb"DEoptim" package of \citet{DEoptim}. + +The latest version of the \verb"PortfolioAnalytics" package can be downloaded from R-forge through the following command: +\begin{verbatim} +install.packages("PortfolioAnalytics", repos="http://R-Forge.R-project.org") +\end{verbatim} + +Its principal functions are: +\begin{itemize} +\item \verb"constraint(assets,min,max,min_sum,max_sum)": the portfolio optimization specification starts with specifying the shape of the weight vector through the function \verb"constraint". The weights have to be between \verb"min} and \verb"max" and their sum between \verb"min_sum" and \verb"max_sum". The first argument \verb"assets" is either a number indicating the number of portfolio assets or a vector holding the names of the assets. 
+ +\item \verb"add.objective(constraints, type, name)": \verb"constraints" is a list holding the objective to be minimized and the constraints. New elements to this list are added by the function \verb"add.objective". Many common risk budget objectives and constraints are prespecified and can be identified by specifying the \verb"type" and \verb"name". + + +\item \verb"constrained_objective(w, R, constraints)": given the portfolio weight and return data, it evaluates the penalty augmented objective function in (\ref{eq:constrainedobj}). + +\item \verb"optimize.portfolio(R,constraints)": this function returns the portfolio weight that solves the problem in (\ref{optimproblem}). {\it R} is the multivariate return series of the portfolio components. + +\item \verb"optimize.portfolio.rebalancing(R,constraints,rebalance_on,trailing_periods": this function solves the multiperiod optimization problem. It returns for each rebalancing period the optimal weights and allows the estimation sample to be either from inception or a moving window. + +\end{itemize} + +Next we illustrate these functions on monthly return data for bond, US equity, international equity and commodity indices, which are the first 4 series +in the dataset \verb"indexes". The first step is to load the package \verb"PortfolioAnalytics" and the dataset. An important first note is that some of the functions (especially \verb" optimize.portfolio.rebalancing") requires the dataset to be a \verb"xts" object \citep{xts}. + + +<>= +options(width=80) +@ + +<>=| +library(PortfolioAnalytics) +#source("constrained_objective.R") +data(indexes) +class(indexes) +indexes <- indexes[,1:4] +head(indexes,2) +tail(indexes,2) +@ + +In what follows, we first illustrate the construction of the penalty augmented objective function. Then we present the code for solving the optimization problem. + +\section{Setting of the objective function} + +\subsection{Weight constraints} + +<>=| +Wcons <- constraint( assets = colnames(indexes[,1:4]) ,min = rep(0,4), max=rep(1,4), min_sum=1,max_sum=1 ) +@ + +Given the weight constraints, we can call the value of the function to be minimized. We consider the case of no violation and a case of violation. By default, \verb"normalize=TRUE" which means that if the sum of weights exceeds \verb"max_sum", the weight vector is normalized by multiplying it with \verb"sum(weights)/max_sum" such that the weights evaluated in the objective function satisfy the \verb"max_sum" constraint. +<>=| +constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = Wcons) +constrained_objective_v1( w = rep(1/3,4) , R = indexes[,1:4] , constraints = Wcons) +constrained_objective_v1( w = rep(1/3,4) , R = indexes[,1:4] , constraints = Wcons, normalize=FALSE) +@ + +The latter value can be recalculated as penalty times the weight violation, that is: $10000 \times 1/3.$ + +\subsection{Minimum CVaR objective function} + +Suppose now we want to find the portfolio that minimizes the 95\% portfolio CVaR subject to the weight constraints listed above. 
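As background for this objective (editorial note, not from the committed vignette): the 95% CVaR being minimized is the expected loss beyond the 95% VaR, which for a sample of returns can be sketched empirically as:

cvar <- function(r, p = 0.95) {
  loss  <- -r                    # treat negative returns as losses
  var_p <- quantile(loss, p)     # empirical value-at-risk at level p
  mean(loss[loss >= var_p])      # average loss in the tail
}
set.seed(7)
cvar(rnorm(1000, 0.005, 0.04))   # toy monthly return sample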
+
+\subsection{Minimum CVaR objective function}
+
+Suppose now we want to find the portfolio that minimizes the 95\% portfolio CVaR subject to the weight constraints listed above.
+
+<<>>=
+ObjSpec = add.objective_v1( constraints = Wcons , type="risk", name="CVaR",
+arguments=list(p=0.95), enabled=TRUE)
+@
+
+The value of the objective function is:
+<<>>=
+constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec)
+@
+This is the CVaR of the equal-weight portfolio, as computed by the function \verb"ES" in the \verb"PerformanceAnalytics" package of \citet{Carl2007}:
+<<>>=
+library(PerformanceAnalytics)
+out<-ES(indexes[,1:4],weights = rep(1/4,4),p=0.95, portfolio_method="component")
+out$MES
+@
+All arguments of the function \verb"ES" can be passed on through \verb"arguments". For example, to reduce the impact of extremes on the portfolio results, it is recommended to winsorize the data using the option \verb'clean="boudt"'.
+
+<<>>=
+out<-ES(indexes[,1:4],weights = rep(1/4,4),p=0.95,clean="boudt", portfolio_method="component")
+out$MES
+@
+
+For the formulation of the objective function, this implies setting:
+<<>>=
+ObjSpec = add.objective_v1( constraints = Wcons , type="risk", name="CVaR",
+arguments=list(p=0.95,clean="boudt"), enabled=TRUE)
+constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec)
+@
+
+An additional option, not available in \verb"ES" for the moment, is to estimate the conditional covariance matrix through the constant conditional correlation model of \citet{Bollerslev90}.
+
+For the formulation of the objective function, this implies setting:
+<<>>=
+ObjSpec = add.objective_v1( constraints = Wcons , type="risk", name="CVaR",
+arguments=list(p=0.95,clean="boudt"), enabled=TRUE, garch=TRUE)
+constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec)
+@
+
+\subsection{Minimum CVaR concentration objective function}
+
+Add the minimum 95\% CVaR concentration objective to the objective function:
+<<>>=
+ObjSpec = add.objective_v1( constraints = Wcons , type="risk_budget_objective", name="CVaR",
+arguments=list(p=0.95,clean="boudt"), min_concentration=TRUE, enabled=TRUE)
+@
+The value of the objective function is:
+<<>>=
+constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec)
+@
+We can verify that this is effectively the largest CVaR contribution of that portfolio as follows:
+<<>>=
+ES(indexes[,1:4],weights = rep(1/4,4),p=0.95,clean="boudt", portfolio_method="component")
+@
+
+\subsection{Risk allocation constraints}
+
+We see that in the equal-weight portfolio, the international equities and commodities investments
+cause more than 30\% of total risk. We could specify as a constraint that no asset may contribute
+more than 30\% to total portfolio risk. This involves the construction of the following objective function:
+
+<<>>=
+ObjSpec = add.objective_v1( constraints = Wcons , type="risk_budget_objective", name="CVaR", max_prisk = 0.3,
+arguments=list(p=0.95,clean="boudt"), enabled=TRUE)
+constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec)
+@
+
+This value corresponds to the default penalty parameter of 10000 times the sum of the exceedances: $10000 \times (0.045775103+0.054685023) \approx 1004.601.$
+
+\section{Optimization}
+
+The penalty augmented objective function is minimized through Differential Evolution. Two parameters are crucial in tuning the optimization: \verb"search_size" and \verb"itermax". The optimization routine
+\begin{enumerate}
+\item first creates the initial generation of \verb"NP = search_size/itermax" guesses for the optimal value of the parameter vector, using the \verb"random_portfolios" function, which generates random weights satisfying the weight constraints;
+\item then lets DE evolve this population of candidate solutions, using alteration and selection operators, in order to minimize the objective function. This step is iterated \verb"itermax" times.
+\end{enumerate}
+It is important that \verb"search_size/itermax" is high enough. It is generally recommended that this ratio be at least ten times the length of the weight vector. For more details on the use of the DE strategy in portfolio allocation, we refer the
+reader to \citet{Ardia2010}.
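+
+For instance (an illustrative back-of-the-envelope calculation, not a package default): with the four assets used here, this recommendation translates into \verb"NP" of at least $10 \times 4 = 40$ initial portfolios, so a choice of \verb"itermax=50" calls for a \verb"search_size" of at least $50 \times 40 = 2000$.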
+
+\subsection{Minimum CVaR portfolio under an upper 40\% CVaR allocation constraint}
+
+The functions needed to obtain the minimum CVaR portfolio under an upper 40\% CVaR allocation constraint are the following:
+\begin{verbatim}
+> ObjSpec <- constraint(assets = colnames(indexes[,1:4]),min = rep(0,4),
++ max=rep(1,4), min_sum=1,max_sum=1 )
+> ObjSpec <- add.objective_v1( constraints = ObjSpec, type="risk",
++ name="CVaR", arguments=list(p=0.95,clean="boudt"),enabled=TRUE)
+> ObjSpec <- add.objective_v1( constraints = ObjSpec,
++ type="risk_budget_objective", name="CVaR", max_prisk = 0.4,
++ arguments=list(p=0.95,clean="boudt"), enabled=TRUE)
+> set.seed(1234) [TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/returnanalytics -r 2732

From noreply at r-forge.r-project.org  Tue Aug  6 23:47:01 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Tue, 6 Aug 2013 23:47:01 +0200 (CEST)
Subject: [Returnanalytics-commits] r2733 - in pkg/PortfolioAnalytics: . R man
Message-ID: <20130806214701.9D9CC1859FA@r-forge.r-project.org>

Author: rossbennett34
Date: 2013-08-06 23:47:01 +0200 (Tue, 06 Aug 2013)
New Revision: 2733

Added:
   pkg/PortfolioAnalytics/man/chart.Scatter.pso.Rd
   pkg/PortfolioAnalytics/man/chart.Weights.pso.Rd
   pkg/PortfolioAnalytics/man/charts.pso.Rd
   pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd
Modified:
   pkg/PortfolioAnalytics/DESCRIPTION
   pkg/PortfolioAnalytics/NAMESPACE
   pkg/PortfolioAnalytics/R/extractstats.R
Log:
adding chart methods for pso optimization methods

Modified: pkg/PortfolioAnalytics/DESCRIPTION
===================================================================
--- pkg/PortfolioAnalytics/DESCRIPTION	2013-08-06 19:15:24 UTC (rev 2732)
+++ pkg/PortfolioAnalytics/DESCRIPTION	2013-08-06 21:47:01 UTC (rev 2733)
@@ -48,3 +48,4 @@
     'optFUN.R'
     'charts.ROI.R'
     'applyFUN.R'
+    'charts.PSO.R'

Modified: pkg/PortfolioAnalytics/NAMESPACE
===================================================================
--- pkg/PortfolioAnalytics/NAMESPACE	2013-08-06 19:15:24 UTC (rev 2732)
+++ pkg/PortfolioAnalytics/NAMESPACE	2013-08-06 21:47:01 UTC (rev 2733)
@@ -6,12 +6,15 @@
 export(box_constraint)
 export(CCCgarch.MM)
 export(chart.Scatter.DE)
+export(chart.Scatter.pso)
 export(chart.Scatter.ROI)
 export(chart.Scatter.RP)
 export(chart.Weights.DE)
+export(chart.Weights.pso)
 export(chart.Weights.ROI)
 export(chart.Weights.RP)
 export(charts.DE)
+export(charts.pso)
 export(charts.ROI)
 export(charts.RP)
 export(constrained_group_tmp)
@@ -53,6 +56,7 @@
 export(optimize.portfolio.rebalancing)
 export(optimize.portfolio)
 export(plot.optimize.portfolio.DEoptim)
+export(plot.optimize.portfolio.pso)
 export(plot.optimize.portfolio.random)
 export(plot.optimize.portfolio.ROI)
 export(plot.optimize.portfolio)

Modified:
pkg/PortfolioAnalytics/R/extractstats.R =================================================================== --- pkg/PortfolioAnalytics/R/extractstats.R 2013-08-06 19:15:24 UTC (rev 2732) +++ pkg/PortfolioAnalytics/R/extractstats.R 2013-08-06 21:47:01 UTC (rev 2733) @@ -294,7 +294,7 @@ result <- cbind(tmpout, psoweights) colnames(result) <- c("out", paste('w',names(object$weights),sep='.')) - rownames(result) <- paste(prefix, "pso.portf", index(tmp), sep=".") + rownames(result) <- paste(prefix, "pso.portf", index(tmpout), sep=".") return(result) } Added: pkg/PortfolioAnalytics/man/chart.Scatter.pso.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Scatter.pso.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.Scatter.pso.Rd 2013-08-06 21:47:01 UTC (rev 2733) @@ -0,0 +1,43 @@ +\name{chart.Scatter.pso} +\alias{chart.Scatter.pso} +\title{classic risk return scatter of random portfolios} +\usage{ + chart.Scatter.pso(pso, R, return.col = "mean", + risk.col = "StdDev", ..., element.color = "darkgray", + cex.axis = 0.8, main = "") +} +\arguments{ + \item{pso}{object created by + \code{\link{optimize.portfolio}}} + + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns, used to recalulate the + risk and return metric} + + \item{return.col}{string matching the objective of a + 'return' objective, on vertical axis} + + \item{risk.col}{string matching the objective of a 'risk' + objective, on horizontal axis} + + \item{...}{any other passthru parameters} + + \item{cex.axis}{The magnification to be used for axis + annotation relative to the current setting of \code{cex}} + + \item{element.color}{color for the default plot scatter + points} +} +\description{ + \code{return.col} must be the name of a function used to + compute the return metric on the portfolio weights + \code{risk.col} must be the name of a function used to + compute the risk metric on the portfolio weights +} +\author{ + Ross Bennett +} +\seealso{ + \code{\link{optimize.portfolio}} +} + Added: pkg/PortfolioAnalytics/man/chart.Weights.pso.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Weights.pso.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.Weights.pso.Rd 2013-08-06 21:47:01 UTC (rev 2733) @@ -0,0 +1,47 @@ +\name{chart.Weights.pso} +\alias{chart.Weights.pso} +\title{boxplot of the weights in the portfolio} +\usage{ + chart.Weights.pso(pso, neighbors = NULL, ..., + main = "Weights", las = 3, xlab = NULL, cex.lab = 1, + element.color = "darkgray", cex.axis = 0.8) +} +\arguments{ + \item{pso}{object created by + \code{\link{optimize.portfolio}}} + + \item{neighbors}{set of 'neighbor' portfolios to + overplot} + + \item{las}{numeric in \{0,1,2,3\}; the style of axis + labels \describe{ \item{0:}{always parallel to the axis + [\emph{default}],} \item{1:}{always horizontal,} + \item{2:}{always perpendicular to the axis,} + \item{3:}{always vertical.} }} + + \item{xlab}{a title for the x axis: see + \code{\link{title}}} + + \item{cex.lab}{The magnification to be used for x and y + labels relative to the current setting of \code{cex}} + + \item{cex.axis}{The magnification to be used for axis + annotation relative to the current setting of \code{cex}} + + \item{element.color}{color for the default plot lines} + + \item{...}{any other passthru parameters} + + \item{main}{an overall title for the plot: see + \code{\link{title}}} +} +\description{ + boxplot of the weights in the portfolio +} 
+\author{ + Ross Bennett +} +\seealso{ + \code{\link{optimize.portfolio}} +} + Added: pkg/PortfolioAnalytics/man/charts.pso.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.pso.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/charts.pso.Rd 2013-08-06 21:47:01 UTC (rev 2733) @@ -0,0 +1,50 @@ +\name{charts.pso} +\alias{charts.pso} +\title{scatter and weights chart for portfolios} +\usage{ + charts.pso(pso, R, return.col = "mean", + risk.col = "StdDev", cex.axis = 0.8, + element.color = "darkgray", neighbors = NULL, + main = "PSO.Portfolios", ...) +} +\arguments{ + \item{pso}{object created by + \code{\link{optimize.portfolio}}} + + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns, used to recalulate the + risk and return metric} + + \item{return.col}{string matching the objective of a + 'return' objective, on vertical axis} + + \item{risk.col}{string matching the objective of a 'risk' + objective, on horizontal axis} + + \item{...}{any other passthru parameters} + + \item{cex.axis}{The magnification to be used for axis + annotation relative to the current setting of \code{cex}} + + \item{element.color}{color for the default plot scatter + points} + + \item{neighbors}{set of 'neighbor' portfolios to + overplot} + + \item{main}{an overall title for the plot: see + \code{\link{title}}} +} +\description{ + \code{return.col} must be the name of a function used to + compute the return metric on the random portfolio weights + \code{risk.col} must be the name of a function used to + compute the risk metric on the random portfolio weights +} +\author{ + Ross Bennett +} +\seealso{ + \code{\link{optimize.portfolio}} +} + Added: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd 2013-08-06 21:47:01 UTC (rev 2733) @@ -0,0 +1,50 @@ +\name{plot.optimize.portfolio.pso} +\alias{plot.optimize.portfolio.pso} +\title{scatter and weights chart for portfolios} +\usage{ + plot.optimize.portfolio.pso(pso, R, return.col = "mean", + risk.col = "StdDev", cex.axis = 0.8, + element.color = "darkgray", neighbors = NULL, + main = "PSO.Portfolios", ...) 
+} +\arguments{ + \item{pso}{object created by + \code{\link{optimize.portfolio}}} + + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns, used to recalulate the + risk and return metric} + + \item{return.col}{string matching the objective of a + 'return' objective, on vertical axis} + + \item{risk.col}{string matching the objective of a 'risk' + objective, on horizontal axis} + + \item{...}{any other passthru parameters} + + \item{cex.axis}{The magnification to be used for axis + annotation relative to the current setting of \code{cex}} + + \item{element.color}{color for the default plot scatter + points} + + \item{neighbors}{set of 'neighbor' portfolios to + overplot} + + \item{main}{an overall title for the plot: see + \code{\link{title}}} +} +\description{ + \code{return.col} must be the name of a function used to + compute the return metric on the random portfolio weights + \code{risk.col} must be the name of a function used to + compute the risk metric on the random portfolio weights +} +\author{ + Ross Bennett +} +\seealso{ + \code{\link{optimize.portfolio}} +} + From noreply at r-forge.r-project.org Wed Aug 7 00:17:15 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 7 Aug 2013 00:17:15 +0200 (CEST) Subject: [Returnanalytics-commits] r2734 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130806221715.48B381859FA@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-07 00:17:14 +0200 (Wed, 07 Aug 2013) New Revision: 2734 Added: pkg/PortfolioAnalytics/R/charts.GenSA.R pkg/PortfolioAnalytics/man/chart.Scatter.GenSA.Rd pkg/PortfolioAnalytics/man/chart.Weights.GenSA.Rd pkg/PortfolioAnalytics/man/charts.GenSA.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd Modified: pkg/PortfolioAnalytics/DESCRIPTION pkg/PortfolioAnalytics/NAMESPACE Log: adding chart methods for GenSA optimization method Modified: pkg/PortfolioAnalytics/DESCRIPTION =================================================================== --- pkg/PortfolioAnalytics/DESCRIPTION 2013-08-06 21:47:01 UTC (rev 2733) +++ pkg/PortfolioAnalytics/DESCRIPTION 2013-08-06 22:17:14 UTC (rev 2734) @@ -49,3 +49,4 @@ 'charts.ROI.R' 'applyFUN.R' 'charts.PSO.R' + 'charts.GenSA.R' Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-06 21:47:01 UTC (rev 2733) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-06 22:17:14 UTC (rev 2734) @@ -6,14 +6,17 @@ export(box_constraint) export(CCCgarch.MM) export(chart.Scatter.DE) +export(chart.Scatter.GenSA) export(chart.Scatter.pso) export(chart.Scatter.ROI) export(chart.Scatter.RP) export(chart.Weights.DE) +export(chart.Weights.GenSA) export(chart.Weights.pso) export(chart.Weights.ROI) export(chart.Weights.RP) export(charts.DE) +export(charts.GenSA) export(charts.pso) export(charts.ROI) export(charts.RP) @@ -56,6 +59,7 @@ export(optimize.portfolio.rebalancing) export(optimize.portfolio) export(plot.optimize.portfolio.DEoptim) +export(plot.optimize.portfolio.GenSA) export(plot.optimize.portfolio.pso) export(plot.optimize.portfolio.random) export(plot.optimize.portfolio.ROI) Added: pkg/PortfolioAnalytics/R/charts.GenSA.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.GenSA.R (rev 0) +++ pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-06 22:17:14 UTC (rev 2734) @@ -0,0 +1,173 @@ +#' boxplot of the weights in the portfolio +#' +#' @param GenSA object created 
by \code{\link{optimize.portfolio}} +#' @param neighbors set of 'neighbor' portfolios to overplot +#' @param las numeric in \{0,1,2,3\}; the style of axis labels +#' \describe{ +#' \item{0:}{always parallel to the axis [\emph{default}],} +#' \item{1:}{always horizontal,} +#' \item{2:}{always perpendicular to the axis,} +#' \item{3:}{always vertical.} +#' } +#' @param xlab a title for the x axis: see \code{\link{title}} +#' @param cex.lab The magnification to be used for x and y labels relative to the current setting of \code{cex} +#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} +#' @param element.color color for the default plot lines +#' @param ... any other passthru parameters +#' @param main an overall title for the plot: see \code{\link{title}} +#' @seealso \code{\link{optimize.portfolio}} +#' @author Ross Bennett +#' @export +chart.Weights.GenSA <- function(GenSA, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ + + if(!inherits(GenSA, "optimize.portfolio.GenSA")) stop("GenSA must be of class 'optimize.portfolio.GenSA'") + + columnnames = names(GenSA$weights) + numassets = length(columnnames) + + constraints <- get_constraints(GenSA$portfolio) + + if(is.null(xlab)) + minmargin = 3 + else + minmargin = 5 + if(main=="") topmargin=1 else topmargin=4 + if(las > 1) {# set the bottom border to accommodate labels + bottommargin = max(c(minmargin, (strwidth(columnnames,units="in"))/par("cin")[1])) * cex.lab + if(bottommargin > 10 ) { + bottommargin<-10 + columnnames<-substr(columnnames,1,19) + # par(srt=45) #TODO figure out how to use text() and srt to rotate long labels + } + } + else { + bottommargin = minmargin + } + par(mar = c(bottommargin, 4, topmargin, 2) +.1) + plot(GenSA$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) + points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) + points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + # if(!is.null(neighbors)){ + # if(is.vector(neighbors)){ + # xtract=extractStats(ROI) + # weightcols<-grep('w\\.',colnames(xtract)) #need \\. to get the dot + # if(length(neighbors)==1){ + # # overplot nearby portfolios defined by 'out' + # orderx = order(xtract[,"out"]) + # subsetx = head(xtract[orderx,], n=neighbors) + # for(i in 1:neighbors) points(subsetx[i,weightcols], type="b", col="lightblue") + # } else{ + # # assume we have a vector of portfolio numbers + # subsetx = xtract[neighbors,weightcols] + # for(i in 1:length(neighbors)) points(subsetx[i,], type="b", col="lightblue") + # } + # } + # if(is.matrix(neighbors) | is.data.frame(neighbors)){ + # # the user has likely passed in a matrix containing calculated values for risk.col and return.col + # nbweights<-grep('w\\.',colnames(neighbors)) #need \\. to get the dot + # for(i in 1:nrow(neighbors)) points(as.numeric(neighbors[i,nbweights]), type="b", col="lightblue") + # # note that here we need to get weight cols separately from the matrix, not from xtract + # # also note the need for as.numeric. 
points() doesn't like matrix inputs + # } + # } + # points(ROI$weights, type="b", col="blue", pch=16) + axis(2, cex.axis = cex.axis, col = element.color) + axis(1, labels=columnnames, at=1:numassets, las=las, cex.axis = cex.axis, col = element.color) + box(col = element.color) +} + +#' classic risk return scatter of random portfolios +#' +#' The GenSA optimizer does not store the portfolio weights like DEoptim or random +#' portfolios so we will generate random portfolios for the scatter plot. +#' +#' \code{return.col} must be the name of a function used to compute the return metric on the random portfolio weights +#' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights +#' +#' @param ROI object created by \code{\link{optimize.portfolio}} +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric +#' @param rp set of weights generated by \code{\link{random_portfolio}} +#' @param return.col string matching the objective of a 'return' objective, on vertical axis +#' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis +#' @param ... any other passthru parameters +#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} +#' @param element.color color for the default plot scatter points +#' @seealso \code{\link{optimize.portfolio}} +#' @author Ross Bennett +#' @export +chart.Scatter.GenSA <- function(GenSA, R, rp=NULL, return.col="mean", risk.col="StdDev", ..., element.color = "darkgray", cex.axis=0.8, main=""){ + + # If the user does not pass in rp, then we will generate random portfolios + if(is.null(rp)){ + permutations <- match.call(expand.dots=TRUE)$permutations + if(is.null(permutations)) permutations <- 2000 + rp <- random_portfolios(portfolio=GenSA$portfolio, permutations=permutations) + } + + # Get the optimal weights from the output of optimize.portfolio + wts <- GenSA$weights + + # cbind the optimal weights and random portfolio weights + rp <- rbind(wts, rp) + + returnpoints <- applyFUN(R=R, weights=rp, FUN=return.col, ...=...) + riskpoints <- applyFUN(R=R, weights=rp, FUN=risk.col, ...=...) + + plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, main=main) + points(x=riskpoints[1], y=returnpoints[1], col="blue", pch=16) # optimal + axis(1, cex.axis = cex.axis, col = element.color) + axis(2, cex.axis = cex.axis, col = element.color) + box(col = element.color) +} + +#' scatter and weights chart for portfolios +#' +#' \code{return.col} must be the name of a function used to compute the return metric on the random portfolio weights +#' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights +#' +#' @param GenSA object created by \code{\link{optimize.portfolio}} +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric +#' @param rp set of weights generated by \code{\link{random_portfolio}} +#' @param return.col string matching the objective of a 'return' objective, on vertical axis +#' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis +#' @param ... 
any other passthru parameters +#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} +#' @param element.color color for the default plot scatter points +#' @param neighbors set of 'neighbor' portfolios to overplot +#' @param main an overall title for the plot: see \code{\link{title}} +#' @seealso \code{\link{optimize.portfolio}} +#' @author Ross Bennett +#' @export +charts.GenSA <- function(GenSA, R, rp=NULL, return.col="mean", risk.col="StdDev", + cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ + # Specific to the output of the optimize_method=pso + op <- par(no.readonly=TRUE) + layout(matrix(c(1,2)),height=c(2,2),width=1) + par(mar=c(4,4,4,2)) + chart.Scatter.GenSA(GenSA=GenSA, R=R, rp=rp, return.col=return.col, risk.col=risk.col, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) + par(mar=c(2,4,0,2)) + chart.Weights.GenSA(GenSA=GenSA, neighbors=neighbors, las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis, ...=..., main="") + par(op) +} + +#' scatter and weights chart for portfolios +#' +#' \code{return.col} must be the name of a function used to compute the return metric on the random portfolio weights +#' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights +#' +#' @param GenSA object created by \code{\link{optimize.portfolio}} +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric +#' @param rp set of weights generated by \code{\link{random_portfolio}} +#' @param return.col string matching the objective of a 'return' objective, on vertical axis +#' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis +#' @param ... any other passthru parameters +#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} +#' @param element.color color for the default plot scatter points +#' @param neighbors set of 'neighbor' portfolios to overplot +#' @param main an overall title for the plot: see \code{\link{title}} +#' @seealso \code{\link{optimize.portfolio}} +#' @author Ross Bennett +#' @export +plot.optimize.portfolio.GenSA <- function(GenSA, R, rp=NULL, return.col="mean", risk.col="StdDev", cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ + charts.GenSA(GenSA=GenSA, R=R, rp=rp, return.col=return.col, risk.col=risk.col, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) 
+} Added: pkg/PortfolioAnalytics/man/chart.Scatter.GenSA.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Scatter.GenSA.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.Scatter.GenSA.Rd 2013-08-06 22:17:14 UTC (rev 2734) @@ -0,0 +1,51 @@ +\name{chart.Scatter.GenSA} +\alias{chart.Scatter.GenSA} +\title{classic risk return scatter of random portfolios} +\usage{ + chart.Scatter.GenSA(GenSA, R, rp = NULL, + return.col = "mean", risk.col = "StdDev", ..., + element.color = "darkgray", cex.axis = 0.8, main = "") +} +\arguments{ + \item{ROI}{object created by + \code{\link{optimize.portfolio}}} + + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns, used to recalulate the + risk and return metric} + + \item{rp}{set of weights generated by + \code{\link{random_portfolio}}} + + \item{return.col}{string matching the objective of a + 'return' objective, on vertical axis} + + \item{risk.col}{string matching the objective of a 'risk' + objective, on horizontal axis} + + \item{...}{any other passthru parameters} + + \item{cex.axis}{The magnification to be used for axis + annotation relative to the current setting of \code{cex}} + + \item{element.color}{color for the default plot scatter + points} +} +\description{ + The GenSA optimizer does not store the portfolio weights + like DEoptim or random portfolios so we will generate + random portfolios for the scatter plot. +} +\details{ + \code{return.col} must be the name of a function used to + compute the return metric on the random portfolio weights + \code{risk.col} must be the name of a function used to + compute the risk metric on the random portfolio weights +} +\author{ + Ross Bennett +} +\seealso{ + \code{\link{optimize.portfolio}} +} + Added: pkg/PortfolioAnalytics/man/chart.Weights.GenSA.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Weights.GenSA.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.Weights.GenSA.Rd 2013-08-06 22:17:14 UTC (rev 2734) @@ -0,0 +1,47 @@ +\name{chart.Weights.GenSA} +\alias{chart.Weights.GenSA} +\title{boxplot of the weights in the portfolio} +\usage{ + chart.Weights.GenSA(GenSA, neighbors = NULL, ..., + main = "Weights", las = 3, xlab = NULL, cex.lab = 1, + element.color = "darkgray", cex.axis = 0.8) +} +\arguments{ + \item{GenSA}{object created by + \code{\link{optimize.portfolio}}} + + \item{neighbors}{set of 'neighbor' portfolios to + overplot} + + \item{las}{numeric in \{0,1,2,3\}; the style of axis + labels \describe{ \item{0:}{always parallel to the axis + [\emph{default}],} \item{1:}{always horizontal,} + \item{2:}{always perpendicular to the axis,} + \item{3:}{always vertical.} }} + + \item{xlab}{a title for the x axis: see + \code{\link{title}}} + + \item{cex.lab}{The magnification to be used for x and y + labels relative to the current setting of \code{cex}} + + \item{cex.axis}{The magnification to be used for axis + annotation relative to the current setting of \code{cex}} + + \item{element.color}{color for the default plot lines} + + \item{...}{any other passthru parameters} + + \item{main}{an overall title for the plot: see + \code{\link{title}}} +} +\description{ + boxplot of the weights in the portfolio +} +\author{ + Ross Bennett +} +\seealso{ + \code{\link{optimize.portfolio}} +} + Added: pkg/PortfolioAnalytics/man/charts.GenSA.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.GenSA.Rd (rev 0) 
+++ pkg/PortfolioAnalytics/man/charts.GenSA.Rd 2013-08-06 22:17:14 UTC (rev 2734) @@ -0,0 +1,53 @@ +\name{charts.GenSA} +\alias{charts.GenSA} +\title{scatter and weights chart for portfolios} +\usage{ + charts.GenSA(GenSA, R, rp = NULL, return.col = "mean", + risk.col = "StdDev", cex.axis = 0.8, + element.color = "darkgray", neighbors = NULL, + main = "GenSA.Portfolios", ...) +} +\arguments{ + \item{GenSA}{object created by + \code{\link{optimize.portfolio}}} + + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns, used to recalulate the + risk and return metric} + + \item{rp}{set of weights generated by + \code{\link{random_portfolio}}} + + \item{return.col}{string matching the objective of a + 'return' objective, on vertical axis} + + \item{risk.col}{string matching the objective of a 'risk' + objective, on horizontal axis} + + \item{...}{any other passthru parameters} + + \item{cex.axis}{The magnification to be used for axis + annotation relative to the current setting of \code{cex}} + + \item{element.color}{color for the default plot scatter + points} + + \item{neighbors}{set of 'neighbor' portfolios to + overplot} + + \item{main}{an overall title for the plot: see + \code{\link{title}}} +} +\description{ + \code{return.col} must be the name of a function used to + compute the return metric on the random portfolio weights + \code{risk.col} must be the name of a function used to + compute the risk metric on the random portfolio weights +} +\author{ + Ross Bennett +} +\seealso{ + \code{\link{optimize.portfolio}} +} + Added: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd 2013-08-06 22:17:14 UTC (rev 2734) @@ -0,0 +1,53 @@ +\name{plot.optimize.portfolio.GenSA} +\alias{plot.optimize.portfolio.GenSA} +\title{scatter and weights chart for portfolios} +\usage{ + plot.optimize.portfolio.GenSA(GenSA, R, rp = NULL, + return.col = "mean", risk.col = "StdDev", + cex.axis = 0.8, element.color = "darkgray", + neighbors = NULL, main = "GenSA.Portfolios", ...) 
+} +\arguments{ + \item{GenSA}{object created by + \code{\link{optimize.portfolio}}} + + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns, used to recalulate the + risk and return metric} + + \item{rp}{set of weights generated by + \code{\link{random_portfolio}}} + + \item{return.col}{string matching the objective of a + 'return' objective, on vertical axis} + + \item{risk.col}{string matching the objective of a 'risk' + objective, on horizontal axis} + + \item{...}{any other passthru parameters} + + \item{cex.axis}{The magnification to be used for axis + annotation relative to the current setting of \code{cex}} + + \item{element.color}{color for the default plot scatter + points} + + \item{neighbors}{set of 'neighbor' portfolios to + overplot} + + \item{main}{an overall title for the plot: see + \code{\link{title}}} +} +\description{ + \code{return.col} must be the name of a function used to + compute the return metric on the random portfolio weights + \code{risk.col} must be the name of a function used to + compute the risk metric on the random portfolio weights +} +\author{ + Ross Bennett +} +\seealso{ + \code{\link{optimize.portfolio}} +} + From noreply at r-forge.r-project.org Wed Aug 7 00:18:06 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 7 Aug 2013 00:18:06 +0200 (CEST) Subject: [Returnanalytics-commits] r2735 - pkg/PortfolioAnalytics/R Message-ID: <20130806221806.BA8391859FA@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-07 00:18:06 +0200 (Wed, 07 Aug 2013) New Revision: 2735 Added: pkg/PortfolioAnalytics/R/charts.PSO.R Log: adding chart methods for pso optimization method Added: pkg/PortfolioAnalytics/R/charts.PSO.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.PSO.R (rev 0) +++ pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-06 22:18:06 UTC (rev 2735) @@ -0,0 +1,165 @@ +#' boxplot of the weights in the portfolio +#' +#' @param pso object created by \code{\link{optimize.portfolio}} +#' @param neighbors set of 'neighbor' portfolios to overplot +#' @param las numeric in \{0,1,2,3\}; the style of axis labels +#' \describe{ +#' \item{0:}{always parallel to the axis [\emph{default}],} +#' \item{1:}{always horizontal,} +#' \item{2:}{always perpendicular to the axis,} +#' \item{3:}{always vertical.} +#' } +#' @param xlab a title for the x axis: see \code{\link{title}} +#' @param cex.lab The magnification to be used for x and y labels relative to the current setting of \code{cex} +#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} +#' @param element.color color for the default plot lines +#' @param ... 
any other passthru parameters +#' @param main an overall title for the plot: see \code{\link{title}} +#' @seealso \code{\link{optimize.portfolio}} +#' @author Ross Bennett +#' @export +chart.Weights.pso <- function(pso, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ + + if(!inherits(pso, "optimize.portfolio.pso")) stop("pso must be of class 'optimize.portfolio.pso'") + + columnnames = names(pso$weights) + numassets = length(columnnames) + + constraints <- get_constraints(pso$portfolio) + + if(is.null(xlab)) + minmargin = 3 + else + minmargin = 5 + if(main=="") topmargin=1 else topmargin=4 + if(las > 1) {# set the bottom border to accommodate labels + bottommargin = max(c(minmargin, (strwidth(columnnames,units="in"))/par("cin")[1])) * cex.lab + if(bottommargin > 10 ) { + bottommargin<-10 + columnnames<-substr(columnnames,1,19) + # par(srt=45) #TODO figure out how to use text() and srt to rotate long labels + } + } + else { + bottommargin = minmargin + } + par(mar = c(bottommargin, 4, topmargin, 2) +.1) + plot(pso$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) + points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) + points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + # if(!is.null(neighbors)){ + # if(is.vector(neighbors)){ + # xtract=extractStats(ROI) + # weightcols<-grep('w\\.',colnames(xtract)) #need \\. to get the dot + # if(length(neighbors)==1){ + # # overplot nearby portfolios defined by 'out' + # orderx = order(xtract[,"out"]) + # subsetx = head(xtract[orderx,], n=neighbors) + # for(i in 1:neighbors) points(subsetx[i,weightcols], type="b", col="lightblue") + # } else{ + # # assume we have a vector of portfolio numbers + # subsetx = xtract[neighbors,weightcols] + # for(i in 1:length(neighbors)) points(subsetx[i,], type="b", col="lightblue") + # } + # } + # if(is.matrix(neighbors) | is.data.frame(neighbors)){ + # # the user has likely passed in a matrix containing calculated values for risk.col and return.col + # nbweights<-grep('w\\.',colnames(neighbors)) #need \\. to get the dot + # for(i in 1:nrow(neighbors)) points(as.numeric(neighbors[i,nbweights]), type="b", col="lightblue") + # # note that here we need to get weight cols separately from the matrix, not from xtract + # # also note the need for as.numeric. points() doesn't like matrix inputs + # } + # } + # points(ROI$weights, type="b", col="blue", pch=16) + axis(2, cex.axis = cex.axis, col = element.color) + axis(1, labels=columnnames, at=1:numassets, las=las, cex.axis = cex.axis, col = element.color) + box(col = element.color) +} + + +#' classic risk return scatter of random portfolios +#' +#' +#' \code{return.col} must be the name of a function used to compute the return metric on the portfolio weights +#' \code{risk.col} must be the name of a function used to compute the risk metric on the portfolio weights +#' +#' @param pso object created by \code{\link{optimize.portfolio}} +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric +#' @param return.col string matching the objective of a 'return' objective, on vertical axis +#' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis +#' @param ... 
any other passthru parameters +#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} +#' @param element.color color for the default plot scatter points +#' @seealso \code{\link{optimize.portfolio}} +#' @author Ross Bennett +#' @export +chart.Scatter.pso <- function(pso, R, return.col="mean", risk.col="StdDev", ..., element.color = "darkgray", cex.axis=0.8, main=""){ + if(!inherits(pso, "optimize.portfolio.pso")) stop("pso must be of class 'optimize.portfolio.pso'") + + # Object with the "out" value in the first column and the normalized weights + # The first row is the optimal "out" value and the optimal weights + tmp <- extractStats(pso) + + # Get the weights + wts <- tmp[,-1] + + returnpoints <- applyFUN(R=R, weights=wts, FUN=return.col, ...=...) + riskpoints <- applyFUN(R=R, weights=wts, FUN=risk.col, ...=...) + + plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, main=main) + points(x=riskpoints[1], y=returnpoints[1], col="blue", pch=16) # optimal + axis(1, cex.axis = cex.axis, col = element.color) + axis(2, cex.axis = cex.axis, col = element.color) + box(col = element.color) +} + +#' scatter and weights chart for portfolios +#' +#' \code{return.col} must be the name of a function used to compute the return metric on the random portfolio weights +#' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights +#' +#' @param pso object created by \code{\link{optimize.portfolio}} +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric +#' @param return.col string matching the objective of a 'return' objective, on vertical axis +#' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis +#' @param ... any other passthru parameters +#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} +#' @param element.color color for the default plot scatter points +#' @param neighbors set of 'neighbor' portfolios to overplot +#' @param main an overall title for the plot: see \code{\link{title}} +#' @seealso \code{\link{optimize.portfolio}} +#' @author Ross Bennett +#' @export +charts.pso <- function(pso, R, return.col="mean", risk.col="StdDev", + cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", ...){ + # Specific to the output of the optimize_method=pso + op <- par(no.readonly=TRUE) + layout(matrix(c(1,2)),height=c(2,2),width=1) + par(mar=c(4,4,4,2)) + chart.Scatter.pso(pso=pso, R=R, return.col=return.col, risk.col=risk.col, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) 
+ par(mar=c(2,4,0,2)) + chart.Weights.pso(pso=pso, neighbors=neighbors, las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis, ...=..., main="") + par(op) +} + +#' scatter and weights chart for portfolios +#' +#' \code{return.col} must be the name of a function used to compute the return metric on the random portfolio weights +#' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights +#' +#' @param pso object created by \code{\link{optimize.portfolio}} +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric +#' @param return.col string matching the objective of a 'return' objective, on vertical axis +#' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis +#' @param ... any other passthru parameters +#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} +#' @param element.color color for the default plot scatter points +#' @param neighbors set of 'neighbor' portfolios to overplot +#' @param main an overall title for the plot: see \code{\link{title}} +#' @seealso \code{\link{optimize.portfolio}} +#' @author Ross Bennett +#' @export +plot.optimize.portfolio.pso <- function(pso, R, return.col="mean", risk.col="StdDev", + cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", ...){ + charts.pso(pso=pso, R=R, return.col=return.col, risk.col=risk.col, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) +} From noreply at r-forge.r-project.org Wed Aug 7 00:23:17 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 7 Aug 2013 00:23:17 +0200 (CEST) Subject: [Returnanalytics-commits] r2736 - pkg/PortfolioAnalytics/R Message-ID: <20130806222317.AE93F185B5C@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-07 00:23:17 +0200 (Wed, 07 Aug 2013) New Revision: 2736 Modified: pkg/PortfolioAnalytics/R/charts.ROI.R Log: correcting way to specify permutations for chart.Scatter.ROI Modified: pkg/PortfolioAnalytics/R/charts.ROI.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-06 22:18:06 UTC (rev 2735) +++ pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-06 22:23:17 UTC (rev 2736) @@ -101,7 +101,8 @@ # If the user does not pass in rp, then we will generate random portfolios if(is.null(rp)){ - if(!hasArg(permutations)) permutations <- 2000 + permutations <- match.call(expand.dots=TRUE)$permutations + if(is.null(permutations)) permutations <- 2000 rp <- random_portfolios(portfolio=ROI$portfolio, permutations=permutations) } From noreply at r-forge.r-project.org Wed Aug 7 18:33:07 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 7 Aug 2013 18:33:07 +0200 (CEST) Subject: [Returnanalytics-commits] r2737 - pkg/PerformanceAnalytics/sandbox/pulkit/week5 Message-ID: <20130807163307.21C26185852@r-forge.r-project.org> Author: pulkit Date: 2013-08-07 18:33:06 +0200 (Wed, 07 Aug 2013) New Revision: 2737 Added: pkg/PerformanceAnalytics/sandbox/pulkit/week5/ret.csv Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week5/REDDCOPS.R Log: Test data for reddcops two risky assets Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week5/REDDCOPS.R =================================================================== --- 
pkg/PerformanceAnalytics/sandbox/pulkit/week5/REDDCOPS.R 2013-08-06 22:23:17 UTC (rev 2736) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week5/REDDCOPS.R 2013-08-07 16:33:06 UTC (rev 2737) @@ -39,8 +39,15 @@ #' #'dt<-read.zoo("returns.csv",sep=",",header = TRUE) #'dt<-as.xts(dt) -#'REDDCOPS(dt[,1],delta = 0.33,Rf = (1+dt[,2])^(1/12)-1,h = 12,geometric = TRUE,asset = "one") +#'REDDCOPS(dt[,1],delta = 0.33,Rf = (1+dt[,3])^(1/12)-1,h = 12,geometric = TRUE,asset = "one") #' +#' +#' # with S&P 500 , barclays and T-bill data +#' +#'dt<-read.zoo("ret.csv",sep=";",header = TRUE) +#'dt<-as.xts(dt) +#'REDDCOPS(dt[,1:2],delta = 0.33,Rf = (1+dt[,3])^(1/12)-1,h = 12,geometric = TRUE,asset = "two") +#' #'data(edhec) #'REDDCOPS(edhec,delta = 0.1,Rf = 0,h = 40) #'data(managers) @@ -128,7 +135,6 @@ xt = column.xt else xt = merge(xt, column.xt) } - print(xt) colnames(xt) = columnnames xt = reclass(xt, x) return(xt) Added: pkg/PerformanceAnalytics/sandbox/pulkit/week5/ret.csv =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week5/ret.csv (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week5/ret.csv 2013-08-07 16:33:06 UTC (rev 2737) @@ -0,0 +1,433 @@ +Index;S.P;barc[1:432, 2];X3.month.T.bill +1976-01-31;0.1183057989;0.0109;0.0473 +1976-02-29;-0.0114019433;0.0033;0.05 +1976-03-31;0.0306889981;0.0214;0.0497 +1976-04-30;-0.0109954267;-0.0027;0.0491 +1976-05-31;-0.0143644235;-0.0128;0.0549 +1976-06-30;0.0409263326;0.0199;0.0537 +1976-07-31;-0.0080552359;0.006;0.0517 +1976-08-31;-0.0051237432;0.0238;0.0509 +1976-09-30;0.0226411427;0.014;0.0506 +1976-10-31;-0.0222348917;0.0098;0.0489 +1976-11-30;-0.0077745384;0.0333;0.0442 +1976-12-31;0.0524975514;0.0352;0.0434 +1977-01-31;-0.0505304299;-0.0373;0.0472 +1977-02-28;-0.021660296;-0.0027;0.047 +1977-03-31;-0.0140252454;0.0081;0.0454 +1977-04-30;0.0002032107;0.008;0.0469 +1977-05-31;-0.0235676554;0.0109;0.0503 +1977-06-30;0.0453599667;0.0171;0.0498 +1977-07-31;-0.0162221338;-0.0057;0.054 +1977-08-31;-0.0210419828;0.0183;0.0556 +1977-09-30;-0.0024801075;-0.0034;0.0589 +1977-10-31;-0.043406195;-0.0105;0.0618 +1977-11-30;0.0269655621;0.0113;0.0604 +1977-12-31;0.0028472003;-0.0141;0.0613 +1978-01-31;-0.0615141956;-0.0086;0.0642 +1978-02-28;-0.0247619048;-1e-04;0.0642 +1978-03-31;0.0249310662;-0.0032;0.0647 +1978-04-30;0.0854164331;6e-04;0.0635 +1978-05-31;0.0042342249;-0.006;0.0665 +1978-06-30;-0.0175853558;-0.0061;0.0701 +1978-07-31;0.0539097666;0.0154;0.0676 +1978-08-31;0.0259237187;0.0222;0.0754 +1978-09-30;-0.0072611095;-0.0099;0.0798 +1978-10-31;-0.0915740199;-0.0211;0.0875 +1978-11-30;0.0166398282;0.0182;0.0901 +1978-12-31;0.0148891235;-0.0138;0.0926 +1979-01-31;0.0397461242;0.0203;0.0929 +1979-02-28;-0.0365255679;-0.0142;0.0945 +1979-03-31;0.055151641;0.0135;0.0944 +1979-04-30;0.0016733931;-0.0118;0.0956 +1979-05-31;-0.026336478;0.0242;0.0957 +1979-06-30;0.0386556318;0.0333;0.0895 +1979-07-31;0.0087455058;-0.0066;0.0918 +1979-08-31;0.0530777382;-0.0033;0.0981 +1979-09-30;0;-0.0093;0.1012 +1979-10-31;-0.0686059276;-0.0819;0.1212 +1979-11-30;0.0426242389;0.0298;0.1149 +1979-12-31;0.0167671439;0.0038;0.1204 +1980-01-31;0.0576246063;-0.077;0.12 +1980-02-29;-0.0043798178;-0.0677;0.1401 +1980-03-31;-0.1017948267;-0.0066;0.1424 +1980-04-30;0.0411401704;0.1463;0.1039 +1980-05-31;0.0465707028;0.0401;0.0775 +1980-06-30;0.0269687163;0.0347;0.0788 +1980-07-31;0.0650385154;-0.0417;0.0862 +1980-08-31;0.0058354566;-0.0408;0.0996 +1980-09-30;0.025167511;-0.028;0.1144 +1980-10-31;0.0160210426;-0.0305;0.1271 
+1980-11-30;0.1023770299;0.0175;0.1448 +1980-12-31;-0.0338741816;0.0385;0.143 +1981-01-31;-0.0457424867;-0.0133;0.1459 +1981-02-28;0.0132767271;-0.0424;0.1422 +1981-03-31;0.0360326046;0.0309;0.1246 +1981-04-30;-0.0234558824;-0.0563;0.1486 +1981-05-31;-0.0016565018;0.0692;0.151 +1981-06-30;-0.0104080247;-0.0228;0.1428 +1981-07-31;-0.0022101974;-0.0334;0.1487 +1981-08-31;-0.0620989918;-0.0434;0.1552 +1981-09-30;-0.0538317453;-0.0177;0.1434 +1981-10-31;0.049147874;0.0772;0.1275 +1981-11-30;0.0365903684;0.1075;0.1037 +1981-12-31;-0.030075188;-0.0447;0.1108 +1982-01-31;-0.0175438596;2e-04;0.1252 +1982-02-28;-0.0605481728;0.0179;0.1244 +1982-03-31;-0.010167094;0.0233;0.1326 +1982-04-30;0.0400142908;0.0317;0.1234 +1982-05-31;-0.0391618001;0.0073;0.115 +1982-06-30;-0.020289596;-0.0218;0.1276 +1982-07-31;-0.022990603;0.0441;0.1017 +1982-08-31;0.1159772154;0.0783;0.0842 +1982-09-30;0.0076144256;0.0694;0.0762 +1982-10-31;0.1104467696;0.0891;0.079 +1982-11-30;0.035970685;-0.0057;0.0828 +1982-12-31;0.0152313578;0.0245;0.0792 +1983-01-31;0.0331342435;-0.0356;0.081 +1983-02-28;0.0189951824;0.055;0.0793 +1983-03-31;0.0330946913;-0.0057;0.0864 +1983-04-30;0.0749869247;0.0378;0.0808 +1983-05-31;-0.0124064952;-0.0408;0.0863 +1983-06-30;0.0323295769;0.0085;0.0879 +1983-07-31;-0.0303030303;-0.0553;0.0922 +1983-08-31;0.0113188976;0.0055;0.0926 +1983-09-30;0.0101581509;0.0519;0.0871 +1983-10-31;-0.0151743241;-0.0159;0.0851 +1983-11-30;0.0174258637;0.0207;0.0888 +1983-12-31;-0.0088341346;-0.0059;0.0897 +1984-01-31;-0.009216031;0.0182;0.0889 +1984-02-29;-0.0388593109;-0.0208;0.0914 +1984-03-31;0.0134980262;-0.0219;0.0972 +1984-04-30;0.0054655107;-0.0135;0.0972 +1984-05-31;-0.0593564511;-0.055;0.0975 +1984-06-30;0.0174692793;0.017;0.0992 +1984-07-31;-0.0164512338;0.0775;0.104 +1984-08-31;0.1063321386;0.0249;0.1063 +1984-09-30;-0.0034797216;0.0316;0.1022 +1984-10-31;-6.02046959662372e-05;0.0632;0.0901 +1984-11-30;-0.0151122885;0.0107;0.0844 +1984-12-31;0.0223743734;0.0134;0.0785 +1985-01-31;0.0740851471;0.0342;0.0805 +1985-02-28;0.0086288482;-0.0502;0.085 +1985-03-31;-0.002870074;0.0311;0.0818 +1985-04-30;-0.0045942655;0.0237;0.0785 +1985-05-31;0.0540510482;0.0889;0.0714 +1985-06-30;0.0121340016;0.0137;0.0683 +1985-07-31;-0.0048475371;-0.0134;0.0728 +1985-08-31;-0.0119945527;0.0271;0.0714 +1985-09-30;-0.034724063;3e-04;0.0704 +1985-10-31;0.0425087873;0.0373;0.0719 +1985-11-30;0.0650616373;0.0415;0.0716 +1985-12-31;0.0450610872;0.0597;0.0705 +1986-01-31;0.0023665278;0.0015;0.0697 +1986-02-28;0.0714892813;0.109;0.0702 +1986-03-31;0.0527939362;0.0892;0.0634 +1986-04-30;-0.0141481792;-0.0056;0.061 +1986-05-31;0.0502292799;-0.0591;0.063 +1986-06-30;0.0141095614;0.0655;0.0596 +1986-07-31;-0.0586828257;-0.0104;0.0579 +1986-08-31;0.0711926139;0.0486;0.0517 +1986-09-30;-0.0854386589;-0.0393;0.052 +1986-10-31;0.0547293792;0.0183;0.052 +1986-11-30;0.0214771703;0.0184;0.0539 +1986-12-31;-0.0282882594;-0.002;0.0567 +1987-01-31;0.1317669406;0.0178;0.056 +1987-02-28;0.036923526;0.0134;0.0545 +1987-03-31;0.0263898663;-0.0199;0.0561 +1987-04-30;-0.01145012;-0.0501;0.0553 +1987-05-31;0.006034124;-0.0131;0.0568 +1987-06-30;0.0479145122;0.0119;0.0573 +1987-07-31;0.0482236842;-0.0199;0.0607 +1987-08-31;0.0349588904;-0.0187;0.0625 +1987-09-30;-0.0241661613;-0.0454;0.0661 +1987-10-31;-0.217630426;0.076;0.0527 +1987-11-30;-0.0853489019;6e-04;0.0521 +1987-12-31;0.072861485;0.0198;0.0568 +1988-01-31;0.0404322487;0.064;0.0564 +1988-02-29;0.0418174038;0.011;0.0562 +1988-03-31;-0.0333432903;-0.0329;0.0571 +1988-04-30;0.0094248523;-0.0196;0.0598 
+1988-05-31;0.0031760609;-0.0167;0.0643 +1988-06-30;0.0432560269;0.0442;0.0656 +1988-07-31;-0.0054113346;-0.0223;0.0695 +1988-08-31;-0.0386001029;0.0043;0.073 +1988-09-30;0.039729275;0.0386;0.0725 +1988-10-31;0.0259644735;0.0326;0.0736 +1988-11-30;-0.0188909202;-0.0236;0.0783 +1988-12-31;0.0146876142;0.0146;0.081 +1989-01-31;0.0711147919;0.0222;0.0839 +1989-02-28;-0.0289440952;-0.022;0.0871 +1989-03-31;0.0208059267;0.0106;0.089 +1989-04-30;0.0500898701;0.0259;0.0841 +1989-05-31;0.0351375791;0.0378;0.0861 +1989-06-30;-0.0079246225;0.0618;0.0799 +1989-07-31;0.0883703378;0.0236;0.078 +1989-08-31;0.0155166436;-0.0283;0.0789 +1989-09-30;-0.0065443164;0.0028;0.0791 +1989-10-31;-0.025175426;0.0426;0.0777 +1989-11-30;0.0165413092;0.009;0.0759 +1989-12-31;0.021416804;-0.002;0.0755 +1990-01-31;-0.0688172043;-0.0392;0.0774 +1990-02-28;0.0085389571;-0.0037;0.0777 +1990-03-31;0.0242550243;-0.0056;0.078 +1990-04-30;-0.0268870977;-0.0255;0.0779 +1990-05-31;0.0919891173;0.049;0.0775 +1990-06-30;-0.0088863051;0.0242;0.0774 +1990-07-31;-0.0052231719;0.0091;0.0749 +1990-08-31;-0.0943141935;-0.0467;0.0739 +1990-09-30;-0.0511842758;0.0115;0.0714 +1990-10-31;-0.0066982519;0.0235;0.0711 +1990-11-30;0.0599342105;0.0442;0.0702 +1990-12-31;0.0248277574;0.0204;0.0644 +1991-01-31;0.041517776;0.0122;0.0619 +1991-02-28;0.0672811328;0.0041;0.0604 +1991-03-31;0.0222028496;0.0034;0.0574 +1991-04-30;0.0003198124;0.0132;0.0551 +1991-05-31;0.0386049981;-0.002;0.0553 +1991-06-30;-0.0478926712;-0.009;0.0554 +1991-07-31;0.0448593598;0.0159;0.0553 +1991-08-31;0.0196487971;0.0359;0.0533 +1991-09-30;-0.019143717;0.0322;0.0511 +1991-10-31;0.011834167;7e-04;0.0482 +1991-11-30;-0.043903682;0.004;0.0435 +1991-12-31;0.1115878685;0.0628;0.0388 +1992-01-31;-0.0199237575;-0.0319;0.0384 +1992-02-29;0.0095895103;0.0054;0.0393 +1992-03-31;-0.0218318391;-0.0113;0.0405 +1992-04-30;0.0278926899;-0.0023;0.037 +1992-05-31;0.0009639716;0.0289;0.037 +1992-06-30;-0.017358854;0.013;0.0357 +1992-07-31;0.0393737443;0.0434;0.0318 +1992-08-31;-0.0239975484;0.006;0.0316 +1992-09-30;0.0091056204;0.0146;0.0269 +1992-10-31;0.0021062709;-0.0211;0.0296 +1992-11-30;0.0302617751;0.0056;0.0327 +1992-12-31;0.0101078011;0.0287;0.0308 +1993-01-31;0.0070459709;0.0277;0.029 +1993-02-28;0.0104836137;0.0349;0.0295 +1993-03-31;0.01869728;0.0021;0.0289 +1993-04-30;-0.0254167866;0.0072;0.0291 +1993-05-31;0.0227174629;0.0044;0.0306 +1993-06-30;0.0007552367;0.0441;0.0303 +1993-07-31;-0.0053270592;0.0193;0.0303 +1993-08-31;0.0344319729;0.0433;0.0301 +1993-09-30;-0.0099879196;0.0024;0.0292 +1993-10-31;0.0193929357;0.0082;0.0303 +1993-11-30;-0.0129106727;-0.027;0.0314 +1993-12-31;0.010091167;0.0023;0.0301 +1994-01-31;0.0325008039;0.0249;0.0296 +1994-02-28;-0.0300450572;-0.0434;0.0336 +1994-03-31;-0.0457464572;-0.0456;0.0348 +1994-04-30;0.01153061;-0.0124;0.0387 +1994-05-31;0.0123971524;-0.0083;0.0416 +1994-06-30;-0.0267907996;-0.0104;0.0415 +1994-07-31;0.0314898598;0.037;0.0427 +1994-08-31;0.0375987431;-0.0098;0.0456 +1994-09-30;-0.0268775369;-0.0342;0.0467 +1994-10-31;0.0208337836;-0.0033;0.0503 +1994-11-30;-0.0395046046;0.0071;0.0556 +1994-12-31;0.012299147;0.0165;0.0553 +1995-01-31;0.024277658;0.0272;0.0583 +1995-02-28;0.0360741465;0.0281;0.0576 +1995-03-31;0.0273292435;0.0095;0.057 +1995-04-30;0.0279602964;0.018;0.0569 +1995-05-31;0.0363117095;0.0824;0.0563 +1995-06-30;0.0212785902;0.0121;0.0544 +1995-07-31;0.0317760441;-0.0177;0.0542 +1995-08-31;-0.0003202505;0.024;0.0529 +1995-09-30;0.0400975297;0.0198;0.0524 +1995-10-31;-0.0049793809;0.03;0.0532 
+1995-11-30;0.0410490112;0.026;0.0532 +1995-12-31;0.0174438773;0.0288;0.0496 +1996-01-31;0.0326173429;-0.0012;0.0491 +1996-02-29;0.0069337442;-0.0531;0.0489 +1996-03-31;0.0079165561;-0.0216;0.05 +1996-04-30;0.0134314485;-0.0177;0.0501 +1996-05-31;0.0228533867;-0.0053;0.0504 +1996-06-30;0.0022566954;0.0223;0.0504 +1996-07-31;-0.045748028;0;0.0518 +1996-08-31;0.0188139698;-0.0143;0.0515 +1996-09-30;0.0542032853;0.0292;0.0491 +1996-10-31;0.0261009995;0.0419;0.0503 +1996-11-30;0.0733761538;0.0357;0.05 +1996-12-31;-0.0215053763;-0.0261;0.0507 +1997-01-31;0.0613170613;-0.009;0.0502 +1997-02-28;0.0059275466;2e-04;0.0509 +1997-03-31;-0.0426139956;-0.0275;0.0521 +1997-04-30;0.0584055368;0.0254;0.0514 +1997-05-31;0.0585768837;0.0116;0.0482 +1997-06-30;0.0434526336;0.0209;0.0506 +1997-07-31;0.07814583;0.0641;0.0511 +1997-08-31;-0.0574656034;-0.0309;0.051 +1997-09-30;0.0531535237;0.0296;0.0493 +1997-10-31;-0.0344776624;0.0368;0.0507 +1997-11-30;0.0445868229;0.0157;0.0508 +1997-12-31;0.0157316307;0.0177;0.0522 +1998-01-31;0.0101501396;0.021;0.0506 +1998-02-28;0.0704492594;-0.008;0.0518 +1998-03-31;0.0499456801;0.002;0.0502 +1998-04-30;0.0090764693;0.0034;0.0487 +1998-05-31;-0.0188261749;0.0212;0.0489 +1998-06-30;0.0394382208;0.0263;0.0497 +1998-07-31;-0.0116153955;-0.0057;0.0497 +1998-08-31;-0.1457967109;0.0489;0.0477 +1998-09-30;0.0623955374;0.0368;0.0426 +1998-10-31;0.0802941957;-0.017;0.0423 +1998-11-30;0.0591260342;0.0092;0.0442 +1998-12-31;0.0563753083;-0.0023;0.0437 +1999-01-31;0.0410094124;0.0095;0.0437 +1999-02-28;-0.032282517;-0.0534;0.0455 +1999-03-31;0.0387941825;-0.0044;0.0437 +1999-04-30;0.0379439819;8e-04;0.0443 +1999-05-31;-0.024970416;-0.0159;0.0453 +1999-06-30;0.0544383334;-0.0122;0.0465 +1999-07-31;-0.0320460986;-0.0057;0.0462 +1999-08-31;-0.0062541393;-0.0047;0.0484 +1999-09-30;-0.0285517377;0.0072;0.0474 +1999-10-31;0.0625394672;1e-04;0.0497 +1999-11-30;0.0190618741;-0.0076;0.0515 +1999-12-31;0.0578439208;-0.0177;0.0517 +2000-01-31;-0.0509035222;0.0174;0.0553 +2000-02-29;-0.0201081422;0.0355;0.0562 +2000-03-31;0.0967198958;0.0374;0.0572 +2000-04-30;-0.03079582;-0.01;0.0566 +2000-05-31;-0.0219149976;-0.005;0.0548 +2000-06-30;0.0239335492;0.0217;0.0571 +2000-07-31;-0.0163412622;0.0198;0.0603 +2000-08-31;0.0606990348;0.0244;0.0613 +2000-09-30;-0.0534829477;-0.0171;0.0605 +2000-10-31;-0.0049494957;0.0167;0.0619 +2000-11-30;-0.0800685602;0.0339;0.0603 +2000-12-31;0.0040533861;0.0234;0.0573 +2001-01-31;0.0346365922;-1e-04;0.0486 +2001-02-28;-0.092290686;0.0176;0.0473 +2001-03-31;-0.0642047196;-0.008;0.042 +2001-04-30;0.0768143545;-0.0314;0.0386 +2001-05-31;0.005090199;0.0017;0.0355 +2001-06-30;-0.025035435;0.0103;0.0357 +2001-07-31;-0.0107401297;0.0395;0.0346 +2001-08-31;-0.0641083857;0.024;0.033 +2001-09-30;-0.0817233896;0.0023;0.0235 +2001-10-31;0.0180990259;0.0595;0.0201 +2001-11-30;0.0751759799;-0.053;0.0175 +2001-12-31;0.0075738295;-0.0216;0.0171 +2002-01-31;-0.0155738276;0.014;0.0173 +2002-02-28;-0.0207662361;0.011;0.0176 +2002-03-31;0.0367388613;-0.0462;0.0176 +2002-04-30;-0.0614176522;0.041;0.0174 +2002-05-31;-0.0090814545;1e-04;0.0171 +2002-06-30;-0.0724553479;0.0177;0.0167 +2002-07-31;-0.0790042634;0.0317;0.0168 +2002-08-31;0.0048814199;0.0522;0.0166 +2002-09-30;-0.1100243431;0.0443;0.0154 +2002-10-31;0.0864488274;-0.0373;0.0142 +2002-11-30;0.0570696351;-0.0081;0.012 +2002-12-31;-0.0603325822;0.0433;0.012 +2003-01-31;-0.0274146985;-0.0031;0.0116 +2003-02-28;-0.0170036228;0.0316;0.0118 +2003-03-31;0.0083576057;-0.0167;0.0112 +2003-04-30;0.081044118;0.0128;0.0111 
+2003-05-31;0.0508986607;0.0641;0.0109 +2003-06-30;0.0113222429;-0.0205;0.0089 +2003-07-31;0.0162237045;-0.103;0.0094 +2003-08-31;0.0178731912;0.0218;0.0096 +2003-09-30;-0.0119443259;0.0549;0.0093 +2003-10-31;0.0549614948;-0.0309;0.0094 +2003-11-30;0.0071285131;0.0058;0.0091 +2003-12-31;0.0507654508;0.0119;0.0093 +2004-01-31;0.0172764228;0.0201;0.009 +2004-02-29;0.0122090299;0.0217;0.0094 +2004-03-31;-0.0163589358;0.0153;0.0093 +2004-04-30;-0.0167908294;-0.0625;0.0096 +2004-05-31;0.0120834462;-0.0041;0.0106 +2004-06-30;0.0179890781;0.0097;0.0131 +2004-07-31;-0.0342905228;0.0183;0.0142 +2004-08-31;0.0022873325;0.0414;0.0157 +2004-09-30;0.0093639064;0.0109;0.0168 +2004-10-31;0.0140142475;0.017;0.0187 +2004-11-30;0.0385949389;-0.0248;0.022 +2004-12-31;0.0324581282;0.0276;0.0218 +2005-01-31;-0.0252904482;0.0352;0.0248 +2005-02-28;0.0189033836;-0.0132;0.0272 +2005-03-31;-0.0191176471;-0.0068;0.0273 +2005-04-30;-0.0201085898;0.0391;0.0284 +2005-05-31;0.0299520249;0.0306;0.0293 +2005-06-30;-0.0001426773;0.0203;0.0306 +2005-07-31;0.0359682036;-0.03;0.0334 +2005-08-31;-0.011222026;0.0335;0.0344 +2005-09-30;0.00694894;-0.0358;0.0347 +2005-10-31;-0.017740741;-0.0218;0.0389 +2005-11-30;0.0351861211;0.0069;0.0386 +2005-12-31;-0.0009523962;0.0288;0.0399 +2006-01-31;0.0254668386;-0.0126;0.0437 +2006-02-28;0.0004530967;0.0122;0.0451 +2006-03-31;0.0110958412;-0.0466;0.0452 +2006-04-30;0.0121556604;-0.0279;0.0465 +2006-05-31;-0.0309169013;0.001;0.0474 +2006-06-30;8.66080356511922e-05;0.0095;0.0487 +2006-07-31;0.0050858133;0.021;0.0497 +2006-08-31;0.0212742625;0.0318;0.0492 +2006-09-30;0.0245662745;0.0193;0.0477 +2006-10-31;0.0315080286;0.0084;0.0495 +2006-11-30;0.0164666096;0.0239;0.049 +2006-12-31;0.0126157515;-0.0275;0.0489 +2007-01-31;0.0140590848;-0.0108;0.0499 +2007-02-28;-0.0218461453;0.0353;0.0501 +2007-03-31;0.0099799548;-0.0175;0.049 +2007-04-30;0.0432906831;0.0082;0.0479 +2007-05-31;0.0325492286;-0.0215;0.046 +2007-06-30;-0.0178163097;-0.0104;0.0468 +2007-07-31;-0.0319819071;0.0315;0.0482 +2007-08-31;0.0128635923;0.0185;0.0391 +2007-09-30;0.0357940013;0.0031;0.0372 +2007-10-31;0.014822335;0.0154;0.0384 +2007-11-30;-0.0440434238;0.0536;0.0308 +2007-12-31;-0.0086284889;-0.0055;0.0329 +2008-01-31;-0.0611634749;0.0204;0.0192 +2008-02-29;-0.0347611621;-0.0017;0.0181 +2008-03-31;-0.0059595831;0.0166;0.0136 +2008-04-30;0.0475466848;-0.0233;0.0141 +2008-05-31;0.0106741532;-0.0244;0.0185 +2008-06-30;-0.0859623816;0.0243;0.0187 +2008-07-31;-0.009859375;-0.0062;0.0165 +2008-08-31;0.0121905032;0.0318;0.0169 +2008-09-30;-0.0907914533;0.0137;0.009 +2008-10-31;-0.1694245344;-0.0202;0.0044 +2008-11-30;-0.0748490323;0.1453;1e-04 +2008-12-31;0.0078215657;0.1343;0.0011 +2009-01-31;-0.0856573485;-0.1309;0.0024 +2009-02-28;-0.1099312249;-0.0153;0.0026 +2009-03-31;0.0854045083;0.0414;0.0021 +2009-04-30;0.0939250755;-0.0683;0.0014 +2009-05-31;0.0530814267;-0.0355;0.0014 +2009-06-30;0.0001958352;0.0083;0.0019 +2009-07-31;0.074141757;0.0045;0.0018 +2009-08-31;0.0335601734;0.0232;0.0015 +2009-09-30;0.0357233838;0.0246;0.0014 +2009-10-31;-0.0197619858;-0.0242;5e-04 +2009-11-30;0.057363997;0.0101;6e-04 +2009-12-31;0.0177705977;-0.0622;6e-04 +2010-01-31;-0.0369742624;0.0259;8e-04 +2010-02-28;0.0285136935;0.0011;0.0013 +2010-03-31;0.0587963676;-0.0247;0.0016 +2010-04-30;0.0147593272;0.0334;0.0016 +2010-05-31;-0.0819759162;0.0521;0.0016 +2010-06-30;-0.0538823767;0.0551;0.0018 +2010-07-31;0.0687778328;-0.0063;0.0015 +2010-08-31;-0.0474491649;0.0808;0.0014 +2010-09-30;0.087551104;-0.0225;0.0016 
+2010-10-31;0.0368559411;-0.0465;0.0012 +2010-11-30;-0.0022902828;-0.0128;0.0017 +2010-12-31;0.065300072;-0.0368;0.0012 +2011-01-31;0.0226455902;-0.0308;0.0015 +2011-02-28;0.0319565826;0.0146;0.0015 +2011-03-31;-0.0010473019;8e-04;9e-04 +2011-04-30;0.0284953576;0.0198;4e-04 +2011-05-31;-0.0135009277;0.0371;6e-04 +2011-06-30;-0.0182575082;-0.0227;3e-04 +2011-07-31;-0.0214744366;0.0456;0.001 +2011-08-31;-0.0567910979;0.0997;2e-04 +2011-09-30;-0.071762013;0.1237;2e-04 +2011-10-31;0.1077230383;-0.0449;1e-04 +2011-11-30;-0.0050586452;0.0311;1e-04 +2011-12-31;0.0085327517;0.034;2e-04 From noreply at r-forge.r-project.org Wed Aug 7 19:52:05 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 7 Aug 2013 19:52:05 +0200 (CEST) Subject: [Returnanalytics-commits] r2738 - in pkg/PerformanceAnalytics/sandbox/pulkit: week1/code week3_4/code Message-ID: <20130807175205.61A2D184BDE@r-forge.r-project.org> Author: pulkit Date: 2013-08-07 19:52:05 +0200 (Wed, 07 Aug 2013) New Revision: 2738 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/PSRopt.R pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/GoldenSection.R Log: documentation changes in PSR and golden section Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/PSRopt.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/PSRopt.R 2013-08-07 16:33:06 UTC (rev 2737) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/PSRopt.R 2013-08-07 17:52:05 UTC (rev 2738) @@ -1,7 +1,19 @@ #'@title Implementation of PSR Portfolio Optimization #'@description -#'Maximizing for PSR leads to better diversified and more balanced hedge fund allocations compared to the concentrated outcomes of Sharpe ratio maximization.We would like to find the vector of weights that maximize the expression.Gradient Ascent Logic is used to compute the weights using the Function PsrPortfolio +#'Maximizing for PSR leads to better diversified and more balanced hedge fund allocations compared to the concentrated +#'outcomes of Sharpe ratio maximization.We would like to find the vector of weights that maximize the expression #' +#'\deqn{\hat{PSR}(SR^\ast) = Z\biggl[\frac{(\hat{SR}-SR^\ast)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\ast + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\biggr]} +#' +#'where \eqn{\sigma = \sqrt{E[(r-\mu)^2]}} ,its standard deviation.\eqn{\gamma_3=\frac{E\biggl[(r-\mu)^3\biggr]}{\sigma^3}} its skewness, +#'\eqn{\gamma_4=\frac{E\biggl[(r-\mu)^4\biggr]}{\sigma^4}} its kurtosis and \eqn{SR = \frac{\mu}{\sigma}} its Sharpe Ratio. +#'Because \eqn{\hat{PSR}(SR^\ast)=Z[\hat{Z^\ast}]} is a monotonic increasing function of +#'\eqn{\hat{Z^\ast}} ,it suffices to compute the vector that maximizes \eqn{\hat{Z^\ast}} +#' +#'This optimal vector is invariant of the value adopted by the parameter $SR^\ast$. +#'Gradient Ascent Logic is used to compute the weights using the Function PsrPortfolio + + #'@aliases PsrPortfolio #' #'@param R The return series @@ -22,7 +34,18 @@ #'PsrPortfolio(edhec) PsrPortfolio<-function(R,refSR=0,bounds=NULL,MaxIter = 1000,delta = 0.005){ - + # DESCRIPTION: + # This function returns the weight for which Probabilistic Sharpe Ratio + # is maximized. + # + # INPUT: + # The return series of the portfolio is taken as the input + # refSR , bounds of the weights for each series in the portfolio , + # max iteration for the optimization , delta by which the z value will + # change is also taken as the input. 
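For intuition, the quantity being maximized can be evaluated directly for a single return series. A minimal self-contained sketch of the PSR statistic documented above (psr_sketch and ret are illustrative names, not part of the package; the denominator mirrors the \deqn expression verbatim):

psr_sketch <- function(ret, refSR = 0){
    n  = length(ret)
    sr = mean(ret)/sd(ret)
    g3 = mean((ret - mean(ret))^3)/sd(ret)^3   # sample skewness
    g4 = mean((ret - mean(ret))^4)/sd(ret)^4   # sample kurtosis
    z  = (sr - refSR)*sqrt(n - 1)/sqrt(1 - g3*refSR + (g4 - 1)/4*sr^2)
    pnorm(z)   # PSR = Z[z], a probability in (0,1)
}
psr_sketch(rnorm(120, 0.01, 0.05))   # toy series of 120 monthly returns

PsrPortfolio then searches over the portfolio weights, by gradient ascent, to push this probability as high as possible.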
+  #
+  # OUTPUT:
+  # The weights are given as the output.
  x = checkData(R)
  columns = ncol(x)
  n = nrow(x)

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/GoldenSection.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/GoldenSection.R	2013-08-07 16:33:06 UTC (rev 2737)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/GoldenSection.R	2013-08-07 17:52:05 UTC (rev 2738)
@@ -2,6 +2,19 @@
 #' Golden Section Algorithm
 #'
 #' @description
+#'
+#' The Golden Section Search method is used to find the maximum or minimum of a unimodal
+#' function. (A unimodal function contains only one minimum or maximum on the interval
+#' [a,b].) To make the discussion of the method simpler, let us assume that we are trying to find
+#' the maximum of a function. Choose three points \eqn{x_l}, \eqn{x_1} and \eqn{x_u} \eqn{(x_l \textless x_1 \textless x_u)}
+#' along the x-axis with the corresponding values of the function \eqn{f(x_l)}, \eqn{f(x_1)} and \eqn{f(x_u)}, respectively. Since
+#' \eqn{f(x_1)\textgreater f(x_l)} and \eqn{f(x_1) \textgreater f(x_u)}, the maximum must lie between \eqn{x_l} and \eqn{x_u}. Now
+#' a fourth point denoted by \eqn{x_2} is chosen in the larger of the two intervals \eqn{[x_l,x_1]} and \eqn{[x_1,x_u]}.
+#' Assuming that the interval \eqn{[x_l,x_1]} is larger than the interval \eqn{[x_1,x_u]}, we would choose \eqn{[x_l,x_1]} as the interval
+#' in which \eqn{x_2} is chosen. If \eqn{f(x_2)>f(x_1)} then the new three points would be \eqn{x_l \textless x_2 \textless x_1}, else if
+#' \eqn{f(x_2) \textless f(x_1)} then the new three points would be \eqn{x_2 \textless x_1 \textless x_u}.

From noreply at r-forge.r-project.org  Thu Aug  8 00:47:20 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Thu, 8 Aug 2013 00:47:20 +0200 (CEST)
Subject: [Returnanalytics-commits] r2739 - in pkg/FactorAnalytics: R man
Message-ID: <...>

Author: chenyian
Date: 2013-08-08 00:47:20 +0200 (Thu, 08 Aug 2013)
New Revision: 2739

Modified:
   pkg/FactorAnalytics/R/fitFundamentalFactorModel.R
   pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd
Log:
add standardized factor exposure option. 

Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R
===================================================================
--- pkg/FactorAnalytics/R/fitFundamentalFactorModel.R	2013-08-07 17:52:05 UTC (rev 2738)
+++ pkg/FactorAnalytics/R/fitFundamentalFactorModel.R	2013-08-07 22:47:20 UTC (rev 2739)
@@ -7,12 +7,18 @@
 #' etc. to determine the common risk factors. The function creates the class
 #' "FundamentalFactorModel".
 #'
+#' @details
+#' If style factor exposure is standardized to regression-weighted mean zero, this makes
+#' style factors orthogonal to the World factor (intercept term), which in turn facilitates
+#' interpretation of the style factor returns. See Menchero 2010.
+#'
 #' The original function was designed by Doug Martin and originally implemented
 #' in S-PLUS by a number of UW Ph.D. students: Christopher Green, Eric Aldrich,
 #' and Yindeng Jiang. Guy Yullen re-implemented the function in R and requires
 #' the following additional R libraries: zoo time series library, robust
 #' Insightful robust library ported to R and robustbase Basic robust statistics
 #' package for R
+#'
 #'
 #' @param data data.frame, data must have \emph{assetvar}, \emph{returnvar}, \emph{datevar}
 #' , and exposure.names. Generally, data is in panel data setup, so it needs firm variables
@@ -31,10 +37,14 @@
 #' and location, FALSE for scaling via mean and sd
 #' @param returnsvar A character string giving the name of the return variable
 #' in the data.
-#' @param datevar A character string giving the name of the date variable in
+#' @param datevar A character string gives the name of the date variable in
 #' the data.
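As a quick numerical illustration of the standardization this commit introduces (a hedged sketch only; x stands for one numeric exposure column and w for a weighting variable such as market cap, both made up here):

set.seed(42)
x = rnorm(50, mean = 2)        # hypothetical exposure column
w = runif(50)                  # hypothetical weighting variable
w = w/sum(w)                   # regression weights, summing to one
x.std = (x - sum(w*x))/sd(x)   # weighted mean zero, unit scale
sum(w*x.std)                   # zero (up to rounding) by construction

The scaling inside fitFundamentalFactorModel differs in detail (it divides by the standard deviation of the weighted exposure), but the weighted-mean-zero property described in @details is the same.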
-#' @param assetvar A character string giving the name of the asset variable in +#' @param assetvar A character string gives the name of the asset variable in #' the data. +#' @param standardized.factor.exposure logical flag. Factor exposure will be standarized +#' to regression weighted mean 0 and standardized deviation to 1 if \code(TRUE). +#' Default is \code(FALSE). See Detail. +#' @param weight.var. A character strping gives the name of the weight used for standarizing factor exposures. #' @return an S3 object containing #' \itemize{ #' \item returns.cov A "list" object contains covariance information for @@ -60,9 +70,15 @@ #' the data. #' } #' @author Guy Yullen and Yi-An Chen +#' @references +#' \itemize{ +#' \item "The Characteristics of Factor Portfolios", Fall 2010, MENCHERO Jose, +#' Journal of Performance Measurement. +#' } +#' +#' @export #' @examples #' -#' \dontrun{ #' # BARRA type factor model #' data(Stock.df) #' # there are 447 assets @@ -108,150 +124,166 @@ #' #' #' -#' } -#' @export +#' + fitFundamentalFactorModel <- -function(data,exposure.names, datevar, returnsvar, assetvar, - wls = TRUE, regression = "classic", - covariance = "classic", full.resid.cov = FALSE, robust.scale = FALSE) { - -require(xts) -require(robust) - - -assets = unique(data[,assetvar]) -timedates = as.Date(unique(data[,datevar])) - - + function(data,exposure.names, datevar, returnsvar, assetvar, + wls = TRUE, regression = "classic", + covariance = "classic", full.resid.cov = FALSE, robust.scale = FALSE, + standardized.factor.exposure = FALSE, weight.var) { + + require(xts) + require(robust) + + + assets = unique(data[,assetvar]) + timedates = as.Date(unique(data[,datevar])) + + if (length(timedates) < 2) - stop("At least two time points, t and t-1, are needed for fitting the factor model.") + stop("At least two time points, t and t-1, are needed for fitting the factor model.") if (!is(exposure.names, "vector") || !is.character(exposure.names)) - stop("exposure argument invalid---must be character vector.") + stop("exposure argument invalid---must be character vector.") if (!is(assets, "vector") || !is.character(assets)) - stop("assets argument invalid---must be character vector.") - + stop("assets argument invalid---must be character vector.") + wls <- as.logical(wls) full.resid.cov <- as.logical(full.resid.cov) robust.scale <- as.logical(robust.scale) - + standardized.factor.exposure <- as.logical(standardized.factor.exposure) + if (!match(regression, c("robust", "classic"), FALSE)) - stop("regression must one of 'robust', 'classic'.") + stop("regression must one of 'robust', 'classic'.") if (!match(covariance, c("robust", "classic"), FALSE)) - stop("covariance must one of 'robust', 'classic'.") + stop("covariance must one of 'robust', 'classic'.") this.call <- match.call() if (match(returnsvar, exposure.names, FALSE)) - stop(paste(returnsvar, "cannot be used as an exposure.")) + stop(paste(returnsvar, "cannot be used as an exposure.")) numTimePoints <- length(timedates) numExposures <- length(exposure.names) numAssets <- length(assets) - # tickers <- data[1:numAssets,tickersvar] - - - + + + + # check if exposure.names are numeric, if not, create exposures. 
factors by dummy variables which.numeric <- sapply(data[, exposure.names, drop = FALSE],is.numeric) exposures.numeric <- exposure.names[which.numeric] # industry factor model exposures.factor <- exposure.names[!which.numeric] if (length(exposures.factor) > 1) { - stop("Only one nonnumeric variable can be used at this time.") + stop("Only one nonnumeric variable can be used at this time.") } - + + if (standardized.factor.exposure == TRUE) { + weight = by(data = data, INDICES = as.numeric(data[[datevar]]), + function(x) x[[weight.var]]/sum(x[[weight.var]])) + data[[weight.var]] <- unlist(weight) + + for (i in exposures.numeric) { + standardized.exposure <- by(data = data, INDICES = as.numeric(data[[datevar]]), + function(x) ((x[[i]] - mean(x[[weight.var]]*x[[i]]) )*1/sd(x[[weight.var]]*x[[i]]) )) + data[[i]] <- unlist(standardized.exposure) + } + } + + + + regression.formula <- paste("~", paste(exposure.names, collapse = "+")) # "~ BOOK2MARKET" if (length(exposures.factor)) { - regression.formula <- paste(regression.formula, "- 1") - data[, exposures.factor] <- as.factor(data[, - exposures.factor]) - exposuresToRecode <- names(data[, exposure.names, drop = FALSE])[!which.numeric] - contrasts.list <- lapply(seq(length(exposuresToRecode)), - function(i) function(n, m) contr.treatment(n, contrasts = FALSE)) - names(contrasts.list) <- exposuresToRecode + regression.formula <- paste(regression.formula, "- 1") + data[, exposures.factor] <- as.factor(data[,exposures.factor]) + exposuresToRecode <- names(data[, exposure.names, drop = FALSE])[!which.numeric] + contrasts.list <- lapply(seq(length(exposuresToRecode)), + function(i) function(n, m) contr.treatment(n, contrasts = FALSE)) + names(contrasts.list) <- exposuresToRecode } else { - contrasts.list <- NULL + contrasts.list <- NULL } # turn characters into formula regression.formula <- eval(parse(text = paste(returnsvar,regression.formula))) # RETURN ~ BOOK2MARKET ols.robust <- function(xdf, modelterms, conlist) { - if (length(exposures.factor)) { - zz <- xdf[[exposures.factor]] - xdf[[exposures.factor]] <- if (is.ordered(zz)) - ordered(zz, levels = sort(unique.default(zz))) - else factor(zz) - } - model <- lmRob(modelterms, data = xdf, contrasts = conlist, - control = lmRob.control(mxr = 200, mxf = 200, mxs = 200)) - sdest <- sqrt(diag(model$cov)) - names(sdest) <- names(model$coef) - coefnames <- names(model$coef) - alphaord <- order(coefnames) - model$coef <- model$coef[alphaord] - sdest <- sdest[alphaord] - c(length(model$coef), model$coef, model$coef/sdest, model$resid) + if (length(exposures.factor)) { + zz <- xdf[[exposures.factor]] + xdf[[exposures.factor]] <- if (is.ordered(zz)) + ordered(zz, levels = sort(unique.default(zz))) + else factor(zz) + } + model <- lmRob(modelterms, data = xdf, contrasts = conlist, + control = lmRob.control(mxr = 200, mxf = 200, mxs = 200)) + sdest <- sqrt(diag(model$cov)) + names(sdest) <- names(model$coef) + coefnames <- names(model$coef) + alphaord <- order(coefnames) + model$coef <- model$coef[alphaord] + sdest <- sdest[alphaord] + c(length(model$coef), model$coef, model$coef/sdest, model$resid) } ols.classic <- function(xdf, modelterms, conlist) { - model <- try(lm(formula = modelterms, data = xdf, contrasts = conlist, - singular.ok = FALSE)) - if (is(model, "Error")) { - mess <- geterrmessage() - nn <- regexpr("computed fit is singular", mess) - if (nn > 0) { - cat("At time:", substring(mess, nn), "\n") - model <- lm(formula = modelterms, data = xdf, - contrasts = conlist, singular.ok = TRUE) - } else 
stop(mess) - } - tstat <- rep(NA, length(model$coef)) - tstat[!is.na(model$coef)] <- summary(model, cor = FALSE)$coef[,3] - alphaord <- order(names(model$coef)) - c(length(model$coef), model$coef[alphaord], tstat[alphaord], - model$resid) + model <- try(lm(formula = modelterms, data = xdf, contrasts = conlist, + singular.ok = FALSE)) + if (is(model, "Error")) { + mess <- geterrmessage() + nn <- regexpr("computed fit is singular", mess) + if (nn > 0) { + cat("At time:", substring(mess, nn), "\n") + model <- lm(formula = modelterms, data = xdf, + contrasts = conlist, singular.ok = TRUE) + } else stop(mess) + } + tstat <- rep(NA, length(model$coef)) + tstat[!is.na(model$coef)] <- summary(model, cor = FALSE)$coef[,3] + alphaord <- order(names(model$coef)) + c(length(model$coef), model$coef[alphaord], tstat[alphaord], + model$resid) } wls.robust <- function(xdf, modelterms, conlist, w) { - assign("w", w, pos = 1) - if (length(exposures.factor)) { - zz <- xdf[[exposures.factor]] - xdf[[exposures.factor]] <- if (is.ordered(zz)) - ordered(zz, levels = sort(unique.default(zz))) - else factor(zz) - } - model <- lmRob(modelterms, data = xdf, weights = w, contrasts = conlist, - control = lmRob.control(mxr = 200, mxf = 200, mxs = 200)) - sdest <- sqrt(diag(model$cov)) - names(sdest) <- names(model$coef) - coefnames <- names(model$coef) - alphaord <- order(coefnames) - model$coef <- model$coef[alphaord] - sdest <- sdest[alphaord] - c(length(model$coef), model$coef, model$coef/sdest, model$resid) + assign("w", w, pos = 1) + if (length(exposures.factor)) { + zz <- xdf[[exposures.factor]] + xdf[[exposures.factor]] <- if (is.ordered(zz)) + ordered(zz, levels = sort(unique.default(zz))) + else factor(zz) + } + model <- lmRob(modelterms, data = xdf, weights = w, contrasts = conlist, + control = lmRob.control(mxr = 200, mxf = 200, mxs = 200)) + sdest <- sqrt(diag(model$cov)) + names(sdest) <- names(model$coef) + coefnames <- names(model$coef) + alphaord <- order(coefnames) + model$coef <- model$coef[alphaord] + sdest <- sdest[alphaord] + c(length(model$coef), model$coef, model$coef/sdest, model$resid) } wls.classic <- function(xdf, modelterms, conlist, w) { - assign("w", w, pos = 1) - model <- try(lm(formula = modelterms, data = xdf, contrasts = conlist, - weights = w, singular.ok = FALSE)) - if (is(model, "Error")) { - mess <- geterrmessage() - nn <- regexpr("computed fit is singular", mess) - if (nn > 0) { - cat("At time:", substring(mess, nn), "\n") - model <- lm(formula = modelterms, data = xdf, - contrasts = conlist, weights = w) - } - else stop(mess) + assign("w", w, pos = 1) + model <- try(lm(formula = modelterms, data = xdf, contrasts = conlist, + weights = w, singular.ok = FALSE)) + if (is(model, "Error")) { + mess <- geterrmessage() + nn <- regexpr("computed fit is singular", mess) + if (nn > 0) { + cat("At time:", substring(mess, nn), "\n") + model <- lm(formula = modelterms, data = xdf, + contrasts = conlist, weights = w) } - tstat <- rep(NA, length(model$coef)) - tstat[!is.na(model$coef)] <- summary(model, cor = FALSE)$coef[, - 3] - alphaord <- order(names(model$coef)) - c(length(model$coef), model$coef[alphaord], tstat[alphaord], - model$resid) + else stop(mess) + } + tstat <- rep(NA, length(model$coef)) + tstat[!is.na(model$coef)] <- summary(model, cor = FALSE)$coef[, + 3] + alphaord <- order(names(model$coef)) + c(length(model$coef), model$coef[alphaord], tstat[alphaord], + model$resid) } # FE.hat has T elements # every element t contains @@ -260,82 +292,82 @@ # 3. 
t value of estimated factors # 4. residuals at time t if (!wls) { - if (regression == "robust") { - # ols.robust - FE.hat <- by(data = data, INDICES = as.numeric(data[[datevar]]), - FUN = ols.robust, modelterms = regression.formula, - conlist = contrasts.list) - } else { - # ols.classic - FE.hat <- by(data = data, INDICES = as.numeric(data[[datevar]]), - FUN = ols.classic, modelterms = regression.formula, - conlist = contrasts.list) - } + if (regression == "robust") { + # ols.robust + FE.hat <- by(data = data, INDICES = as.numeric(data[[datevar]]), + FUN = ols.robust, modelterms = regression.formula, + conlist = contrasts.list) + } else { + # ols.classic + FE.hat <- by(data = data, INDICES = as.numeric(data[[datevar]]), + FUN = ols.classic, modelterms = regression.formula, + conlist = contrasts.list) + } } else { - if (regression == "robust") { - # wls.robust - resids <- by(data = data, INDICES = as.numeric(data[[datevar]]), - FUN = function(xdf, modelterms, conlist) { - lmRob(modelterms, data = xdf, contrasts = conlist, - control = lmRob.control(mxr = 200, mxf = 200, - mxs = 200))$resid - }, modelterms = regression.formula, conlist = contrasts.list) - resids <- apply(resids, 1, unlist) - weights <- if (covariance == "robust") - apply(resids, 1, scaleTau2)^2 - else apply(resids, 1, var) - FE.hat <- by(data = data, INDICES = as.numeric(data[[datevar]]), - FUN = wls.robust, modelterms = regression.formula, - conlist = contrasts.list, w = weights) - } else { - # wls.classic - resids <- by(data = data, INDICES = as.numeric(data[[datevar]]), - FUN = function(xdf, modelterms, conlist) { - lm(formula = modelterms, data = xdf, contrasts = conlist, - singular.ok = TRUE)$resid - }, - modelterms = regression.formula, conlist = contrasts.list) - resids <- apply(resids, 1, unlist) - weights <- if (covariance == "robust") - apply(resids, 1, scaleTau2)^2 - else apply(resids, 1, var) - FE.hat <- by(data = data, INDICES = as.numeric(data[[datevar]]), - FUN = wls.classic, modelterms = regression.formula, - conlist = contrasts.list, w = weights) - } + if (regression == "robust") { + # wls.robust + resids <- by(data = data, INDICES = as.numeric(data[[datevar]]), + FUN = function(xdf, modelterms, conlist) { + lmRob(modelterms, data = xdf, contrasts = conlist, + control = lmRob.control(mxr = 200, mxf = 200, + mxs = 200))$resid + }, modelterms = regression.formula, conlist = contrasts.list) + resids <- apply(resids, 1, unlist) + weights <- if (covariance == "robust") + apply(resids, 1, scaleTau2)^2 + else apply(resids, 1, var) + FE.hat <- by(data = data, INDICES = as.numeric(data[[datevar]]), + FUN = wls.robust, modelterms = regression.formula, + conlist = contrasts.list, w = weights) + } else { + # wls.classic + resids <- by(data = data, INDICES = as.numeric(data[[datevar]]), + FUN = function(xdf, modelterms, conlist) { + lm(formula = modelterms, data = xdf, contrasts = conlist, + singular.ok = TRUE)$resid + }, + modelterms = regression.formula, conlist = contrasts.list) + resids <- apply(resids, 1, unlist) + weights <- if (covariance == "robust") + apply(resids, 1, scaleTau2)^2 + else apply(resids, 1, var) + FE.hat <- by(data = data, INDICES = as.numeric(data[[datevar]]), + FUN = wls.classic, modelterms = regression.formula, + conlist = contrasts.list, w = weights) + } } # if there is industry dummy variables if (length(exposures.factor)) { - numCoefs <- length(exposures.numeric) + length(levels(data[, - exposures.factor])) - ncols <- 1 + 2 * numCoefs + numAssets - fnames <- c(exposures.numeric, 
paste(exposures.factor, - levels(data[, exposures.factor]), sep = "")) - cnames <- c("numCoefs", fnames, paste("t", fnames, sep = "."), - assets) + numCoefs <- length(exposures.numeric) + length(levels(data[, + exposures.factor])) + ncols <- 1 + 2 * numCoefs + numAssets + fnames <- c(exposures.numeric, paste(exposures.factor, + levels(data[, exposures.factor]), sep = "")) + cnames <- c("numCoefs", fnames, paste("t", fnames, sep = "."), + assets) } else { - numCoefs <- 1 + length(exposures.numeric) - ncols <- 1 + 2 * numCoefs + numAssets - cnames <- c("numCoefs", "(Intercept)", exposures.numeric, - paste("t", c("(Intercept)", exposures.numeric), sep = "."), - assets) + numCoefs <- 1 + length(exposures.numeric) + ncols <- 1 + 2 * numCoefs + numAssets + cnames <- c("numCoefs", "(Intercept)", exposures.numeric, + paste("t", c("(Intercept)", exposures.numeric), sep = "."), + assets) } - -# create matrix for fit + + # create matrix for fit FE.hat.mat <- matrix(NA, ncol = ncols, nrow = numTimePoints, dimnames = list(as.character(as.Date(as.numeric(names(FE.hat)), origin = "1970-01-01")), - cnames)) + cnames)) # give each element t names for (i in 1:length(FE.hat)) { - names(FE.hat[[i]])[1] <- "numCoefs" - nc <- FE.hat[[i]][1] - names(FE.hat[[i]])[(2 + nc):(1 + 2 * nc)] <- paste("t", - names(FE.hat[[i]])[2:(1 + nc)], sep = ".") - if (length(FE.hat[[i]]) != (1 + 2 * nc + numAssets)) - stop(paste("bad count in row", i, "of FE.hat")) - names(FE.hat[[i]])[(2 + 2 * nc):(1 + 2 * nc + numAssets)] <- assets - idx <- match(names(FE.hat[[i]]), colnames(FE.hat.mat)) - FE.hat.mat[i, idx] <- FE.hat[[i]] + names(FE.hat[[i]])[1] <- "numCoefs" + nc <- FE.hat[[i]][1] + names(FE.hat[[i]])[(2 + nc):(1 + 2 * nc)] <- paste("t", + names(FE.hat[[i]])[2:(1 + nc)], sep = ".") + if (length(FE.hat[[i]]) != (1 + 2 * nc + numAssets)) + stop(paste("bad count in row", i, "of FE.hat")) + names(FE.hat[[i]])[(2 + 2 * nc):(1 + 2 * nc + numAssets)] <- assets + idx <- match(names(FE.hat[[i]]), colnames(FE.hat.mat)) + FE.hat.mat[i, idx] <- FE.hat[[i]] } # give back the names of timedates timedates <- as.Date(as.numeric(dimnames(FE.hat)[[1]]), origin = "1970-01-01") @@ -344,36 +376,36 @@ f.hat <- xts(x = FE.hat.mat[, 2:(1 + numCoefs)], order.by = timedates) # check for outlier gomat <- apply(coredata(f.hat), 2, function(x) abs(x - median(x, - na.rm = TRUE)) > 4 * mad(x, na.rm = TRUE)) + na.rm = TRUE)) > 4 * mad(x, na.rm = TRUE)) if (any(gomat, na.rm = TRUE) ) { - cat("\n\n*** Possible outliers found in the factor returns:\n\n") - for (i in which(apply(gomat, 1, any, na.rm = TRUE))) print(f.hat[i, - gomat[i, ], drop = FALSE]) + cat("\n\n*** Possible outliers found in the factor returns:\n\n") + for (i in which(apply(gomat, 1, any, na.rm = TRUE))) print(f.hat[i, + gomat[i, ], drop = FALSE]) } tstats <- xts(x = FE.hat.mat[, (2 + nc):(1 + 2 * nc)], order.by = timedates) # residuals for every asset ordered by time resids <- xts(x = FE.hat.mat[, (2 + 2 * numCoefs):(1 + 2 * - numCoefs + numAssets)], order.by = timedates) - -if (covariance == "robust") { - if (kappa(na.exclude(coredata(f.hat))) < 1e+10) { - Cov.factors <- covRob(coredata(f.hat), estim = "pairwiseGK", - distance = FALSE, na.action = na.omit) - } else { - cat("Covariance matrix of factor returns is singular.\n") - Cov.factors <- covRob(coredata(f.hat), distance = FALSE, - na.action = na.omit) - } - resid.vars <- apply(coredata(resids), 2, scaleTau2, na.rm = T)^2 - D.hat <- if (full.resid.cov) - covOGK(coredata(resids), sigmamu = scaleTau2, n.iter = 1) - else - diag(resid.vars) + 
numCoefs + numAssets)], order.by = timedates) + + if (covariance == "robust") { + if (kappa(na.exclude(coredata(f.hat))) < 1e+10) { + Cov.factors <- covRob(coredata(f.hat), estim = "pairwiseGK", + distance = FALSE, na.action = na.omit) + } else { + cat("Covariance matrix of factor returns is singular.\n") + Cov.factors <- covRob(coredata(f.hat), distance = FALSE, + na.action = na.omit) + } + resid.vars <- apply(coredata(resids), 2, scaleTau2, na.rm = T)^2 + D.hat <- if (full.resid.cov) + covOGK(coredata(resids), sigmamu = scaleTau2, n.iter = 1) + else + diag(resid.vars) } else { - Cov.factors <- covClassic(coredata(f.hat), distance = FALSE,na.action = na.omit) - resid.vars <- apply(coredata(resids), 2, var, na.rm = TRUE) - D.hat <- if (full.resid.cov) { - covClassic(coredata(resids), distance = FALSE, na.action = na.omit) + Cov.factors <- covClassic(coredata(f.hat), distance = FALSE,na.action = na.omit) + resid.vars <- apply(coredata(resids), 2, var, na.rm = TRUE) + D.hat <- if (full.resid.cov) { + covClassic(coredata(resids), distance = FALSE, na.action = na.omit) } else { diag(resid.vars) } } # create betas from origial database @@ -381,40 +413,40 @@ colnames <- coefs.names B.final[, match("(Intercept)", colnames, 0)] <- 1 numeric.columns <- match(exposures.numeric, colnames, 0) -# only take the latest beta to compute FM covariance -# should we let user choose which beta to use ? + # only take the latest beta to compute FM covariance + # should we let user choose which beta to use ? B.final[, numeric.columns] <- as.matrix(data[ (as.numeric(data[[datevar]]) == - timedates[numTimePoints]), exposures.numeric]) + timedates[numTimePoints]), exposures.numeric]) rownames(B.final) = assets colnames(B.final) = colnames(f.hat) if (length(exposures.factor)) { - B.final[, grep(exposures.factor, x = colnames)][cbind(seq(numAssets), - as.numeric(data[data[[datevar]] == timedates[numTimePoints], - exposures.factor]))] <- 1 + B.final[, grep(exposures.factor, x = colnames)][cbind(seq(numAssets), + as.numeric(data[data[[datevar]] == timedates[numTimePoints], + exposures.factor]))] <- 1 } cov.returns <- B.final %*% Cov.factors$cov %*% t(B.final) + - if (full.resid.cov) { D.hat$cov - } else { D.hat } + if (full.resid.cov) { D.hat$cov + } else { D.hat } mean.cov.returns = tapply(data[[returnsvar]],data[[assetvar]], mean) Cov.returns <- list(cov = cov.returns, mean=mean.cov.returns, eigenvalues = eigen(cov.returns, - only.values = TRUE, symmetric = TRUE)$values) - -# report residual covaraince if full.resid.cov is true. -if (full.resid.cov) { - Cov.resids <- D.hat - } + only.values = TRUE, symmetric = TRUE)$values) + + # report residual covaraince if full.resid.cov is true. 
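  # (Aside, illustration only: cov.returns assembled above is the classic
  # factor-model identity Cov(R) = B %*% Cov(F) %*% t(B) + D, with exposure
  # matrix B, factor covariance Cov(F) and residual covariance D. A toy
  # check with made-up dimensions:
  #   B <- matrix(rnorm(12), 4, 3); SF <- diag(3); D <- diag(0.1, 4)
  #   B %*% SF %*% t(B) + D   # 4 x 4 symmetric asset covariance matrix
  # )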
+  if (full.resid.cov) {
+    Cov.resids <- D.hat
+  }
 else {
   Cov.resids <- NULL
 }
-#
-# # r-square for each asset = 1 - SSE/SST
-# SSE <- apply(fit.fund$residuals^2,2,sum)
-# SST <- tapply(data[,returnsvar],data[,assetvar],function(x) sum((x-mean(x))^2))
-# r2 <- 1- SSE/SST
-
-# change names for intercept
-colnames(f.hat)[1] <- "Intercept"
-
+  #
+  # # r-square for each asset = 1 - SSE/SST
+  # SSE <- apply(fit.fund$residuals^2,2,sum)
+  # SST <- tapply(data[,returnsvar],data[,assetvar],function(x) sum((x-mean(x))^2))
+  # r2 <- 1- SSE/SST
+
+  # change names for intercept
+  colnames(f.hat)[1] <- "Intercept"
+
 output <- list(returns.cov = Cov.returns,
                factor.cov = Cov.factors,
                resids.cov = Cov.resids,
@@ -432,4 +464,4 @@
                exposure.names = exposure.names)
 class(output) <- "FundamentalFactorModel"
 return(output)
-}
\ No newline at end of file
+  }
\ No newline at end of file

Modified: pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd
===================================================================
--- pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd	2013-08-07 17:52:05 UTC (rev 2738)
+++ pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd	2013-08-07 22:47:20 UTC (rev 2739)
@@ -5,7 +5,8 @@
 fitFundamentalFactorModel(data, exposure.names, datevar,
     returnsvar, assetvar, wls = TRUE, regression = "classic",
     covariance = "classic",
-    full.resid.cov = FALSE, robust.scale = FALSE)
+    full.resid.cov = FALSE, robust.scale = FALSE,
+    standardized.factor.exposure = FALSE, weight.var)
 }
 \arguments{
   \item{data}{data.frame, data must have \emph{assetvar},
@@ -37,11 +38,19 @@
   \item{returnsvar}{A character string giving the name of
   the return variable in the data.}

-  \item{datevar}{A character string giving the name of the
+  \item{datevar}{A character string gives the name of the
   date variable in the data.}

-  \item{assetvar}{A character string giving the name of the
+  \item{assetvar}{A character string gives the name of the
   asset variable in the data.}
+
+  \item{standardized.factor.exposure}{logical flag. Factor
+  exposure will be standarized to regression weighted mean
+  0 and standardized deviation to 1 if \code(TRUE). Default
+  is \code(FALSE). See Detail.}
+
+  \item{weight.var.}{A character strping gives the name of
+  the weight used for standarizing factor exposures.}
 }
 \value{
 an S3 object containing \itemize{ \item returns.cov A
@@ -78,6 +87,12 @@
 "FundamentalFactorModel".
 }
 \details{
+  If style factor exposure is standardized to
+  regression-weighted mean zero, this makes style factors
+  orthogonal to the World factor (intercept term), which in
+  turn facilitates interpretation of the style factor
+  returns. See Menchero 2010.
+
   The original function was designed by Doug Martin and
   originally implemented in S-PLUS by a number of UW Ph.D.
   students: Christopher Green, Eric Aldrich, and Yindeng
@@ -87,7 +102,6 @@
 to R and robustbase Basic robust statistics package for R
 }
 \examples{
-\dontrun{
 # BARRA type factor model
 data(Stock.df)
 # there are 447 assets
@@ -130,12 +144,13 @@
 test.fit2$resids
 test.fit2$tstats
 test.fit2$call
-
-
-  }
-}
 \author{
   Guy Yullen and Yi-An Chen
 }
+\references{
+  \itemize{ \item "The Characteristics of Factor
+  Portfolios", Fall 2010, MENCHERO Jose, Journal of
+  Performance Measurement.
} +} From noreply at r-forge.r-project.org Thu Aug 8 12:24:51 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 8 Aug 2013 12:24:51 +0200 (CEST) Subject: [Returnanalytics-commits] r2740 - in pkg/Meucci: R demo Message-ID: <20130808102451.88C751845BE@r-forge.r-project.org> Author: xavierv Date: 2013-08-08 12:24:51 +0200 (Thu, 08 Aug 2013) New Revision: 2740 Added: pkg/Meucci/demo/S_MultiVarSqrRootRule.R Modified: pkg/Meucci/R/TwoDimEllipsoid.R pkg/Meucci/demo/S_CrossSectionIndustries.R Log: - added S_MultiVarSqrRootRule demo script from chapter 3 and fixed function TwoDimEllipsoid Modified: pkg/Meucci/R/TwoDimEllipsoid.R =================================================================== --- pkg/Meucci/R/TwoDimEllipsoid.R 2013-08-07 22:47:20 UTC (rev 2739) +++ pkg/Meucci/R/TwoDimEllipsoid.R 2013-08-08 10:24:51 UTC (rev 2740) @@ -32,8 +32,8 @@ for( i in 1 : NumSteps ) { # normalized variables (parametric representation of the ellipsoid) - y = c( cos( Angle[ i ] ), sin( Angle[ i ] ) ); - Centered_Ellipse = c( Centered_Ellipse, Eigen$vectors %*% diag(sqrt(Eigen$values)) %*% y ); ##ok + y = rbind( cos( Angle[ i ] ), sin( Angle[ i ] ) ); + Centered_Ellipse = c( Centered_Ellipse, Eigen$vectors %*% diag(sqrt(Eigen$values), length(Eigen$values)) %*% y ); ##ok } R = Location %*% array( 1, NumSteps ) + Scale * Centered_Ellipse; Modified: pkg/Meucci/demo/S_CrossSectionIndustries.R =================================================================== --- pkg/Meucci/demo/S_CrossSectionIndustries.R 2013-08-07 22:47:20 UTC (rev 2739) +++ pkg/Meucci/demo/S_CrossSectionIndustries.R 2013-08-08 10:24:51 UTC (rev 2740) @@ -10,7 +10,7 @@ ################################################################################################################## ### Load data # loads weekly stock returns X and indices stock returns F -load(" ../data/securitiesTS.Rda"); +load("../data/securitiesTS.Rda"); Data_Securities = securitiesTS$data[ , -1 ]; # 1st column is date load("../data/securitiesIndustryClassification.Rda"); Added: pkg/Meucci/demo/S_MultiVarSqrRootRule.R =================================================================== --- pkg/Meucci/demo/S_MultiVarSqrRootRule.R (rev 0) +++ pkg/Meucci/demo/S_MultiVarSqrRootRule.R 2013-08-08 10:24:51 UTC (rev 2740) @@ -0,0 +1,69 @@ +#' This script illustrates the multivariate square root rule-of-thumb +#' Described in A. Meucci,"Risk and Asset Allocation", Springer, 2005, Chapter 3. 
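The rule can be checked in a few lines of simulation before running the demo: for i.i.d. invariants, the covariance of k-step sums is roughly k times the one-step covariance. A self-contained sketch under that assumption (all numbers illustrative):

set.seed( 7 );
X  = matrix( rnorm( 5000 * 2 ), ncol = 2 );   # simulated one-step invariants
k  = 5;
Xk = apply( X, 2, function( x ) colSums( matrix( x, nrow = k ) ) );
diag( cov( Xk ) ) / diag( cov( X ) );         # both ratios close to k = 5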
+#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_MultiVarSqrRootRule.m" +# +#' @author Xavier Valls \email{flamejat@@gmail.com} + +################################################################################################################## +### Load data +load("../data/swaps.Rda"); + +################################################################################################################## +### Aggregation steps in days +Steps = c( 1, 5, 22 ); + +################################################################################################################## +### Plots +Agg = list(); +#names(Agg)=c( "M_hat" , "S_hat", "M_norm", "S_norm"); + +dev.new(); +plot( swaps$X[ , 1 ],swaps$X[ , 2], xlab = swaps$Names[[1]][1], ylab = swaps$Names[[2]][1] ); +T = nrow(swaps$X ); + +for( s in 1 : length(Steps) ) +{ + + + + # compute series at aggregated time steps + k = Steps[ s ]; + AggX = NULL; + t = 1; + while( ( t + k + 1 ) <= T ) + { + NewTerm = apply( matrix(swaps$X[ t : (t+k-1), ], ,ncol(swaps$X) ),2,sum); + AggX = rbind( AggX, NewTerm ); ##ok + t = t + k; + } + + + # empirical mean/covariance + + if(s==1) + { + M_hat = matrix(apply(AggX, 2, mean)); + S_hat = cov(AggX); + Agg[[s]] = list( M_hat = M_hat, S_hat = S_hat, M_norm = k / Steps[ 1 ] * M_hat, S_norm = k / Steps[ 1 ] * S_hat ); + }else + { + + Agg[[s]] = list( + M_hat = matrix(apply(AggX,2,mean)), + S_hat = cov(AggX), + # mean/covariance implied by propagation law of risk for invariants + M_norm = k / Steps[ 1 ] * Agg[[ 1 ]]$M_hat, + S_norm = k / Steps[ 1 ] * Agg[[ 1 ]]$S_hat + ); + } + + # plots + h1 = TwoDimEllipsoid( Agg[[ s ]]$M_norm, Agg[[ s ]]$S_norm, 1, 0, 0 ); + + h2 = TwoDimEllipsoid( Agg[[ s ]]$M_hat, Agg[[ s ]]$S_hat, 1, 0, 0 ); + + +} From noreply at r-forge.r-project.org Thu Aug 8 19:52:59 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 8 Aug 2013 19:52:59 +0200 (CEST) Subject: [Returnanalytics-commits] r2741 - in pkg/FactorAnalytics: . R man Message-ID: <20130808175259.9F7E9184880@r-forge.r-project.org> Author: chenyian Date: 2013-08-08 19:52:59 +0200 (Thu, 08 Aug 2013) New Revision: 2741 Added: pkg/FactorAnalytics/man/CornishFisher.Rd Removed: pkg/FactorAnalytics/man/rCornishFisher.Rd Modified: pkg/FactorAnalytics/NAMESPACE pkg/FactorAnalytics/R/fitFundamentalFactorModel.R pkg/FactorAnalytics/R/rCornishFisher.R pkg/FactorAnalytics/man/ pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd Log: debug CornishFisher.Rd document Modified: pkg/FactorAnalytics/NAMESPACE =================================================================== --- pkg/FactorAnalytics/NAMESPACE 2013-08-08 10:24:51 UTC (rev 2740) +++ pkg/FactorAnalytics/NAMESPACE 2013-08-08 17:52:59 UTC (rev 2741) @@ -1,3 +1,4 @@ +export(CornishFisher) export(dCornishFisher) export(factorModelCovariance) export(factorModelEsDecomposition) @@ -9,7 +10,6 @@ export(fitTimeSeriesFactorModel) export(pCornishFisher) export(qCornishFisher) -export(rCornishFisher) S3method(plot,FundamentalFactorModel) S3method(plot,StatFactorModel) S3method(plot,TimeSeriesFactorModel) Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-08 10:24:51 UTC (rev 2740) +++ pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-08 17:52:59 UTC (rev 2741) @@ -42,9 +42,9 @@ #' @param assetvar A character string gives the name of the asset variable in #' the data. 
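Besides the documentation fix above, r2741 reorganizes the Cornish-Fisher helpers, which follow R's d/p/q/r naming convention. A small usage sketch with illustrative arguments (signatures as in the package source quoted under r2743 below, assuming the package is installed):

library(factorAnalytics)
qnorm(0.95)                                           # Gaussian quantile, no adjustment
qCornishFisher(0.95, n = 60, skew = -0.5, ekurt = 2)  # skew/kurtosis-adjusted quantile
pCornishFisher(1.64, n = 60, skew = -0.5, ekurt = 2)  # two-term Edgeworth CDF at q = 1.64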
#' @param standardized.factor.exposure logical flag. Factor exposure will be standarized -#' to regression weighted mean 0 and standardized deviation to 1 if \code(TRUE). -#' Default is \code(FALSE). See Detail. -#' @param weight.var. A character strping gives the name of the weight used for standarizing factor exposures. +#' to regression weighted mean 0 and standardized deviation to 1 if \code{TRUE}. +#' Default is \code{FALSE}. See Detail. +#' @param weight.var A character strping gives the name of the weight used for standarizing factor exposures. #' @return an S3 object containing #' \itemize{ #' \item returns.cov A "list" object contains covariance information for Modified: pkg/FactorAnalytics/R/rCornishFisher.R =================================================================== --- pkg/FactorAnalytics/R/rCornishFisher.R 2013-08-08 10:24:51 UTC (rev 2740) +++ pkg/FactorAnalytics/R/rCornishFisher.R 2013-08-08 17:52:59 UTC (rev 2741) @@ -1,6 +1,8 @@ #' Functions for Cornish-Fisher density, CDF, random number simulation and #' quantile. #' +#'@name CornishFisher +#'@aliases CornishFisher #'@aliases rCornishFisher #'@aliases dCornishFisher #'@aliases qCornishFisher @@ -46,6 +48,7 @@ #' @details CDF(q) = Pr(sqrt(n)*(x_bar-mu)/sigma < q) #' #' @examples +#' \dontrun{ #' # generate 1000 observation from Cornish-Fisher distribution #' rc <- rCornishFisher(1000,1,0,5) #'hist(rc,breaks=100,freq=FALSE,main="simulation of Cornish Fisher Distribution", @@ -63,7 +66,7 @@ #' pnorm(q) #' # use edgeworth expansion #' pCornishFisher(q,n=5,skew=2,ekurt=6) -#' +#' } Property changes on: pkg/FactorAnalytics/man ___________________________________________________________________ Modified: svn:ignore - CornishFisher.Rd Style.Rd covEWMA.Rd factorModelPerformanceAttribution.Rd impliedFactorReturns.Rd modifiedEsReport.Rd modifiedIncrementalES.Rd modifiedIncrementalVaR.Rd modifiedPortfolioEsDecomposition.Rd modifiedPortfolioVaRDecomposition.Rd modifiedVaRReport.Rd nonparametricEsReport.Rd nonparametricIncrementalES.Rd nonparametricIncrementalVaR.Rd nonparametricPortfolioEsDecomposition.Rd nonparametricPortfolioVaRDecomposition.Rd nonparametricVaRReport.Rd normalEsReport.Rd normalIncrementalES.Rd normalIncrementalVaR.Rd normalPortfolioEsDecomposition.Rd normalPortfolioVaRDecomposition.Rd normalVaRReport.Rd plot.FM.attribution.Rd plot.MacroFactorModel.Rd portfolioSdDecomposition.Rd print.MacroFactorModel.Rd scenarioPredictions.Rd scenarioPredictionsPortfolio.Rd stock.Rd summary.FM.attribution.Rd summary.MacroFactorModel.Rd summary.TimeSeriesModel.Rd + CornishFisher.Rd Style.Rd covEWMA.Rd factorModelPerformanceAttribution.Rd impliedFactorReturns.Rd modifiedEsReport.Rd modifiedIncrementalES.Rd modifiedIncrementalVaR.Rd modifiedPortfolioEsDecomposition.Rd modifiedPortfolioVaRDecomposition.Rd modifiedVaRReport.Rd nonparametricEsReport.Rd nonparametricIncrementalES.Rd nonparametricIncrementalVaR.Rd nonparametricPortfolioEsDecomposition.Rd nonparametricPortfolioVaRDecomposition.Rd nonparametricVaRReport.Rd normalEsReport.Rd normalIncrementalES.Rd normalIncrementalVaR.Rd normalPortfolioEsDecomposition.Rd normalPortfolioVaRDecomposition.Rd normalVaRReport.Rd plot.FM.attribution.Rd plot.MacroFactorModel.Rd portfolioSdDecomposition.Rd print.MacroFactorModel.Rd rCornishFisher.Rd scenarioPredictions.Rd scenarioPredictionsPortfolio.Rd stock.Rd summary.FM.attribution.Rd summary.MacroFactorModel.Rd summary.TimeSeriesModel.Rd Added: pkg/FactorAnalytics/man/CornishFisher.Rd 
=================================================================== --- pkg/FactorAnalytics/man/CornishFisher.Rd (rev 0) +++ pkg/FactorAnalytics/man/CornishFisher.Rd 2013-08-08 17:52:59 UTC (rev 2741) @@ -0,0 +1,80 @@ +\name{CornishFisher} +\alias{CornishFisher} +\alias{dCornishFisher} +\alias{pCornishFisher} +\alias{qCornishFisher} +\alias{rCornishFisher} +\title{Functions for Cornish-Fisher density, CDF, random number simulation and +quantile.} +\usage{ + rCornishFisher(n, sigma, skew, ekurt, seed = NULL) +} +\arguments{ + \item{n}{scalar, number of simulated values in + rCornishFisher. Sample length in + density,distribution,quantile function.} + + \item{sigma}{scalar, standard deviation.} + + \item{skew}{scalar, skewness.} + + \item{ekurt}{scalar, excess kurtosis.} + + \item{seed}{set seed here. Default is \code{NULL}.} + + \item{x,q}{vector of standardized quantiles. See detail.} + + \item{p}{vector of probabilities.} +} +\value{ + n simulated values from Cornish-Fisher distribution. +} +\description{ + \itemize{ \item \code{rCornishFisher} simulate + observations based on Cornish-Fisher quantile expansion + given mean, standard deviation, skewness and excess + kurtosis. \item \code{dCornishFisher} Computes + Cornish-Fisher density from two term Edgeworth expansion + given mean, standard deviation, skewness and excess + kurtosis. \item \code{pCornishFisher} Computes + Cornish-Fisher CDF from two term Edgeworth expansion + given mean, standard deviation, skewness and excess + kurtosis. \item \code{qCornishFisher} Computes + Cornish-Fisher quantiles from two term Edgeworth + expansion given mean, standard deviation, skewness and + excess kurtosis. } +} +\details{ + CDF(q) = Pr(sqrt(n)*(x_bar-mu)/sigma < q) +} +\examples{ +\dontrun{ + # generate 1000 observation from Cornish-Fisher distribution +rc <- rCornishFisher(1000,1,0,5) +hist(rc,breaks=100,freq=FALSE,main="simulation of Cornish Fisher Distribution", + xlim=c(-10,10)) +lines(seq(-10,10,0.1),dnorm(seq(-10,10,0.1),mean=0,sd=1),col=2) +# compare with standard normal curve + +# example from A.dasGupta p.188 exponential example +# x is iid exp(1) distribution, sample size = 5 +# then x_bar is Gamma(shape=5,scale=1/5) distribution +q <- c(0,0.4,1,2) +# exact cdf +pgamma(q/sqrt(5)+1,shape=5,scale=1/5) +# use CLT +pnorm(q) +# use edgeworth expansion +pCornishFisher(q,n=5,skew=2,ekurt=6) +} +} +\author{ + Eric Zivot and Yi-An Chen. +} +\references{ + \itemize{ \item A.DasGupta, "Asymptotic Theory of + Statistics and Probability", Springer Science+Business + Media,LLC 2008 \item Thomas A.Severini, "Likelihood + Methods in Statistics", Oxford University Press, 2000 } +} + Modified: pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd 2013-08-08 10:24:51 UTC (rev 2740) +++ pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd 2013-08-08 17:52:59 UTC (rev 2741) @@ -46,10 +46,10 @@ \item{standardized.factor.exposure}{logical flag. Factor exposure will be standarized to regression weighted mean - 0 and standardized deviation to 1 if \code(TRUE). Default - is \code(FALSE). See Detail.} + 0 and standardized deviation to 1 if \code{TRUE}. Default + is \code{FALSE}. 
See Detail.} - \item{weight.var.}{A character strping gives the name of + \item{weight.var}{A character strping gives the name of the weight used for standarizing factor exposures.} } \value{ Deleted: pkg/FactorAnalytics/man/rCornishFisher.Rd =================================================================== --- pkg/FactorAnalytics/man/rCornishFisher.Rd 2013-08-08 10:24:51 UTC (rev 2740) +++ pkg/FactorAnalytics/man/rCornishFisher.Rd 2013-08-08 17:52:59 UTC (rev 2741) @@ -1,77 +0,0 @@ -\name{rCornishFisher} -\alias{dCornishFisher} -\alias{pCornishFisher} -\alias{qCornishFisher} -\alias{rCornishFisher} -\title{Functions for Cornish-Fisher density, CDF, random number simulation and -quantile.} -\usage{ - rCornishFisher(n, sigma, skew, ekurt, seed = NULL) -} -\arguments{ - \item{n}{scalar, number of simulated values in - rCornishFisher. Sample length in - density,distribution,quantile function.} - - \item{sigma}{scalar, standard deviation.} - - \item{skew}{scalar, skewness.} - - \item{ekurt}{scalar, excess kurtosis.} - - \item{seed}{set seed here. Default is \code{NULL}.} - - \item{x,q}{vector of standardized quantiles. See detail.} - - \item{p}{vector of probabilities.} -} -\value{ - n simulated values from Cornish-Fisher distribution. -} -\description{ - \itemize{ \item \code{rCornishFisher} simulate - observations based on Cornish-Fisher quantile expansion - given mean, standard deviation, skewness and excess - kurtosis. \item \code{dCornishFisher} Computes - Cornish-Fisher density from two term Edgeworth expansion - given mean, standard deviation, skewness and excess - kurtosis. \item \code{pCornishFisher} Computes - Cornish-Fisher CDF from two term Edgeworth expansion - given mean, standard deviation, skewness and excess - kurtosis. \item \code{qCornishFisher} Computes - Cornish-Fisher quantiles from two term Edgeworth - expansion given mean, standard deviation, skewness and - excess kurtosis. } -} -\details{ - CDF(q) = Pr(sqrt(n)*(x_bar-mu)/sigma < q) -} -\examples{ -# generate 1000 observation from Cornish-Fisher distribution -rc <- rCornishFisher(1000,1,0,5) -hist(rc,breaks=100,freq=FALSE,main="simulation of Cornish Fisher Distribution", - xlim=c(-10,10)) -lines(seq(-10,10,0.1),dnorm(seq(-10,10,0.1),mean=0,sd=1),col=2) -# compare with standard normal curve - -# example from A.dasGupta p.188 exponential example -# x is iid exp(1) distribution, sample size = 5 -# then x_bar is Gamma(shape=5,scale=1/5) distribution -q <- c(0,0.4,1,2) -# exact cdf -pgamma(q/sqrt(5)+1,shape=5,scale=1/5) -# use CLT -pnorm(q) -# use edgeworth expansion -pCornishFisher(q,n=5,skew=2,ekurt=6) -} -\author{ - Eric Zivot and Yi-An Chen. 
-}
-\references{
-  \itemize{ \item A.DasGupta, "Asymptotic Theory of
-  Statistics and Probability", Springer Science+Business
-  Media,LLC 2008 \item Thomas A.Severini, "Likelihood
-  Methods in Statistics", Oxford University Press, 2000 }
-}
-

From noreply at r-forge.r-project.org  Thu Aug  8 20:10:46 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Thu, 8 Aug 2013 20:10:46 +0200 (CEST)
Subject: [Returnanalytics-commits] r2742 - pkg/PerformanceAnalytics/sandbox/pulkit/week6
Message-ID: <20130808181046.D38E1184880 at r-forge.r-project.org>

Author: pulkit
Date: 2013-08-08 20:10:46 +0200 (Thu, 08 Aug 2013)
New Revision: 2742

Added:
   pkg/PerformanceAnalytics/sandbox/pulkit/week6/data.csv
Modified:
   pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R
Log:
Data For testing Drawdown beta

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R	2013-08-08 17:52:59 UTC (rev 2741)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R	2013-08-08 18:10:46 UTC (rev 2742)
@@ -83,6 +83,7 @@
   DDbeta<-function(x){
     q = NULL
     q_quantile = quantile(drawdowns_m,1-p)
+    print(drawdowns_m)
     for(i in 1:nrow(Rm)){
       if(drawdowns_m[i]

From noreply at r-forge.r-project.org  Thu Aug  8 20:11:07 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Thu, 8 Aug 2013 20:11:07 +0200 (CEST)
Subject: [Returnanalytics-commits] r2743 - in pkg/FactorAnalytics: . R man
Message-ID: <...>

Author: chenyian
Date: 2013-08-08 20:11:07 +0200 (Thu, 08 Aug 2013)
New Revision: 2743

Added:
   pkg/FactorAnalytics/R/dCornishFisher.R
   pkg/FactorAnalytics/R/pCornishFisher.R
   pkg/FactorAnalytics/R/qCornishFisher.R
Modified:
   pkg/FactorAnalytics/NAMESPACE
   pkg/FactorAnalytics/R/fitFundamentalFactorModel.R
   pkg/FactorAnalytics/man/CornishFisher.Rd
   pkg/FactorAnalytics/man/Stock.df.Rd
   pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd
Log:
debug rd files to pass R CMD check

Modified: pkg/FactorAnalytics/NAMESPACE
===================================================================
--- pkg/FactorAnalytics/NAMESPACE	2013-08-08 18:10:46 UTC (rev 2742)
+++ pkg/FactorAnalytics/NAMESPACE	2013-08-08 18:11:07 UTC (rev 2743)
@@ -1,4 +1,3 @@
-export(CornishFisher)
 export(dCornishFisher)
 export(factorModelCovariance)
 export(factorModelEsDecomposition)
@@ -10,6 +9,7 @@
 export(fitTimeSeriesFactorModel)
 export(pCornishFisher)
 export(qCornishFisher)
+export(rCornishFisher)
 S3method(plot,FundamentalFactorModel)
 S3method(plot,StatFactorModel)
 S3method(plot,TimeSeriesFactorModel)

Added: pkg/FactorAnalytics/R/dCornishFisher.R
===================================================================
--- pkg/FactorAnalytics/R/dCornishFisher.R	                        (rev 0)
+++ pkg/FactorAnalytics/R/dCornishFisher.R	2013-08-08 18:11:07 UTC (rev 2743)
@@ -0,0 +1,15 @@
+#'@name CornishFisher
+#'@aliases CornishFisher
+#'@aliases rCornishFisher
+#'@aliases dCornishFisher
+#'@aliases qCornishFisher
+#'@aliases pCornishFisher
+#' @export
+dCornishFisher <-
+function(x, n,skew, ekurt) {
+
+density <- dnorm(x) + 1/sqrt(n)*(skew/6*(x^3-3*x))*dnorm(x) +
+  1/n *( (skew)^2/72*(x^6 - 15*x^4 + 45*x^2 -15) + ekurt/24 *(x^4-6*x^2+3) )*dnorm(x)
+return(density)
+}
+

Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R
===================================================================
--- pkg/FactorAnalytics/R/fitFundamentalFactorModel.R	2013-08-08 18:10:46 UTC (rev 2742)
+++ pkg/FactorAnalytics/R/fitFundamentalFactorModel.R	2013-08-08 18:11:07 UTC (rev 2743)
@@ -83,7 +83,7 @@
 #' data(Stock.df)
 #' # there are 447 assets
 #' exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP")
-#' test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names,
+#'
test.fit <- fitFundamentalFactorModel(data=stock,exposure.names=exposure.names, #' datevar = "DATE", returnsvar = "RETURN", #' assetvar = "TICKER", wls = TRUE, #' regression = "classic", @@ -104,7 +104,7 @@ #' # BARRA type Industry Factor Model #' exposure.names <- c("GICS.SECTOR") #' # the rest keep the same -#' test.fit2 <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, +#' test.fit2 <- fitFundamentalFactorModel(data=stock,exposure.names=exposure.names, #' datevar = "DATE", returnsvar = "RETURN", #' assetvar = "TICKER", wls = TRUE, #' regression = "classic", Added: pkg/FactorAnalytics/R/pCornishFisher.R =================================================================== --- pkg/FactorAnalytics/R/pCornishFisher.R (rev 0) +++ pkg/FactorAnalytics/R/pCornishFisher.R 2013-08-08 18:11:07 UTC (rev 2743) @@ -0,0 +1,16 @@ +#'@name CornishFisher +#'@aliases CornishFisher +#'@aliases rCornishFisher +#'@aliases dCornishFisher +#'@aliases qCornishFisher +#'@aliases pCornishFisher +#' @export + +pCornishFisher <- +function(q,n,skew, ekurt) { +zq = q +CDF = pnorm(zq) + 1/sqrt(n) *(skew/6 * (1-zq^2))*dnorm(zq) + + 1/n *( (ekurt)/24*(3*zq-zq^3)+ (skew)^2/72*(10*zq^3 - 15*zq -zq^5))*dnorm(zq) +return(CDF) +} + Added: pkg/FactorAnalytics/R/qCornishFisher.R =================================================================== --- pkg/FactorAnalytics/R/qCornishFisher.R (rev 0) +++ pkg/FactorAnalytics/R/qCornishFisher.R 2013-08-08 18:11:07 UTC (rev 2743) @@ -0,0 +1,18 @@ +#'@name CornishFisher +#'@aliases CornishFisher +#'@aliases rCornishFisher +#'@aliases dCornishFisher +#'@aliases qCornishFisher +#'@aliases pCornishFisher +#' @export + +qCornishFisher <- +function(p,n,skew, ekurt) { +zq = qnorm(p) +q.cf = zq + 1/sqrt(n)* (((zq^2 - 1) * skew)/6) + 1/n*((((zq^3 - 3 * zq) * + ekurt)/24) - ((((2 * zq^3) - 5 * zq) * skew^2)/36) ) +return(q.cf) + + +} + Modified: pkg/FactorAnalytics/man/CornishFisher.Rd =================================================================== --- pkg/FactorAnalytics/man/CornishFisher.Rd 2013-08-08 18:10:46 UTC (rev 2742) +++ pkg/FactorAnalytics/man/CornishFisher.Rd 2013-08-08 18:11:07 UTC (rev 2743) @@ -7,6 +7,12 @@ \title{Functions for Cornish-Fisher density, CDF, random number simulation and quantile.} \usage{ + dCornishFisher(x, n, skew, ekurt) + + pCornishFisher(q, n, skew, ekurt) + + qCornishFisher(p, n, skew, ekurt) + rCornishFisher(n, sigma, skew, ekurt, seed = NULL) } \arguments{ Modified: pkg/FactorAnalytics/man/Stock.df.Rd =================================================================== --- pkg/FactorAnalytics/man/Stock.df.Rd 2013-08-08 18:10:46 UTC (rev 2742) +++ pkg/FactorAnalytics/man/Stock.df.Rd 2013-08-08 18:11:07 UTC (rev 2743) @@ -1,6 +1,7 @@ \docType{data} \name{Stock.df} \alias{Stock.df} +\alias{stock} \title{constructed NYSE 447 assets from 1996-01-01 through 2003-12-31.} \description{ constructed NYSE 447 assets from 1996-01-01 through Modified: pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd 2013-08-08 18:10:46 UTC (rev 2742) +++ pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd 2013-08-08 18:11:07 UTC (rev 2743) @@ -106,7 +106,7 @@ data(Stock.df) # there are 447 assets exposure.names <- c("BOOK2MARKET", "LOG.MARKETCAP") -test.fit <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, +test.fit <- fitFundamentalFactorModel(data=stock,exposure.names=exposure.names, datevar = "DATE", returnsvar = 
"RETURN", assetvar = "TICKER", wls = TRUE, regression = "classic", @@ -127,7 +127,7 @@ # BARRA type Industry Factor Model exposure.names <- c("GICS.SECTOR") # the rest keep the same -test.fit2 <- fitFundamentalFactorModel(data=data,exposure.names=exposure.names, +test.fit2 <- fitFundamentalFactorModel(data=stock,exposure.names=exposure.names, datevar = "DATE", returnsvar = "RETURN", assetvar = "TICKER", wls = TRUE, regression = "classic", From noreply at r-forge.r-project.org Thu Aug 8 20:22:08 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 8 Aug 2013 20:22:08 +0200 (CEST) Subject: [Returnanalytics-commits] r2744 - pkg/FactorAnalytics/R Message-ID: <20130808182208.56BC1184F77@r-forge.r-project.org> Author: chenyian Date: 2013-08-08 20:22:07 +0200 (Thu, 08 Aug 2013) New Revision: 2744 Modified: pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r Log: debug Modified: pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r 2013-08-08 18:11:07 UTC (rev 2743) +++ pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r 2013-08-08 18:22:07 UTC (rev 2744) @@ -201,13 +201,13 @@ w[k] = w[k-1]/decay.factor } w <- w/sum(w) - rollReg <- function(data.z, formula,w) { + rollReg.w <- function(data.z, formula,w) { coef(lm(formula,weights=w, data = as.data.frame(data.z))) } reg.z = zoo(fit.lm$model[-length(fit.lm$model)], as.Date(rownames(fit.lm$model))) factorNames = colnames(fit.lm$model)[c(-1,-length(fit.lm$model))] fit.formula = as.formula(paste(asset.name,"~", paste(factorNames, collapse="+"), sep=" ")) - rollReg.z = rollapply(reg.z, FUN=rollReg, fit.formula,w, width=24, by.column = FALSE, + rollReg.z = rollapply(reg.z, FUN=rollReg.w, fit.formula,w, width=24, by.column = FALSE, align="right") plot(rollReg.z, main=paste("24-month rolling regression estimates:", asset.name, sep=" ")) } From noreply at r-forge.r-project.org Thu Aug 8 21:37:51 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 8 Aug 2013 21:37:51 +0200 (CEST) Subject: [Returnanalytics-commits] r2745 - pkg/FactorAnalytics/inst Message-ID: <20130808193751.F010E185586@r-forge.r-project.org> Author: chenyian Date: 2013-08-08 21:37:51 +0200 (Thu, 08 Aug 2013) New Revision: 2745 Added: pkg/FactorAnalytics/inst/test.Rnw Log: vignette file test. 
Added: pkg/FactorAnalytics/inst/test.Rnw =================================================================== --- pkg/FactorAnalytics/inst/test.Rnw (rev 0) +++ pkg/FactorAnalytics/inst/test.Rnw 2013-08-08 19:37:51 UTC (rev 2745) @@ -0,0 +1,13 @@ +\documentclass{article} +% \VignetteIndexEntry{test file} +%\VignetteDepends{factorAnalytics} +%\VignetteKeywords{factor model, risk analytics} + +\begin{document} +\SweaveOpts{concordance=TRUE} + +\title{test file for vignette} +\author{Yi-An Chen} +\maketitle + +\end{document} \ No newline at end of file From noreply at r-forge.r-project.org Thu Aug 8 22:57:04 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 8 Aug 2013 22:57:04 +0200 (CEST) Subject: [Returnanalytics-commits] r2746 - pkg/FactorAnalytics/inst Message-ID: <20130808205704.A3105180922@r-forge.r-project.org> Author: chenyian Date: 2013-08-08 22:57:04 +0200 (Thu, 08 Aug 2013) New Revision: 2746 Removed: pkg/FactorAnalytics/inst/test.Rnw Modified: pkg/FactorAnalytics/inst/ Log: Property changes on: pkg/FactorAnalytics/inst ___________________________________________________________________ Added: svn:ignore + test.Rnw Deleted: pkg/FactorAnalytics/inst/test.Rnw =================================================================== --- pkg/FactorAnalytics/inst/test.Rnw 2013-08-08 19:37:51 UTC (rev 2745) +++ pkg/FactorAnalytics/inst/test.Rnw 2013-08-08 20:57:04 UTC (rev 2746) @@ -1,13 +0,0 @@ -\documentclass{article} -% \VignetteIndexEntry{test file} -%\VignetteDepends{factorAnalytics} -%\VignetteKeywords{factor model, risk analytics} - -\begin{document} -\SweaveOpts{concordance=TRUE} - -\title{test file for vignette} -\author{Yi-An Chen} -\maketitle - -\end{document} \ No newline at end of file From noreply at r-forge.r-project.org Fri Aug 9 00:33:31 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 9 Aug 2013 00:33:31 +0200 (CEST) Subject: [Returnanalytics-commits] r2747 - in pkg/FactorAnalytics: . vignettes Message-ID: <20130808223331.B19A518588F@r-forge.r-project.org> Author: chenyian Date: 2013-08-09 00:33:31 +0200 (Fri, 09 Aug 2013) New Revision: 2747 Added: pkg/FactorAnalytics/vignettes/ pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw Log: add vignettes (incomplete) Added: pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw =================================================================== --- pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw (rev 0) +++ pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-08 22:33:31 UTC (rev 2747) @@ -0,0 +1,45 @@ +\documentclass{article} +\usepackage[utf8]{inputenc} +% \VignetteIndexEntry{test file} +% \VignetteKeywords{factor model, risk analytics} + +\begin{document} +\SweaveOpts{concordance=TRUE} + +\title{factorAnalytics: fundamental factor model} +\author{Yi-An Chen} +\maketitle + +\section{Introduction} +This vignette aims to help users learn how to use the fundamental factor model in the \verb at factorAnalytics@ package. We will walk users through a few examples from scratch. + +\subsection{Fundamental Factor Model} +A factor model is defined as \\ +\begin{equation} \label{fm} + r_t = b f_t + \epsilon_t\;, t=1 \cdots T +\end{equation} +where $r_t$ is N x 1, $b$ is N x K, and $f_t$ is K x 1. N is the number of variables and K is the number of factors. $b$ is usually called the factor exposures or factor loadings, and $f_t$ the factor returns. $\epsilon_t$ is serially uncorrelated but may be cross-correlated. The model is useful for fitting, for example, asset returns; a minimal simulation sketch follows.
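For concreteness, a minimal R sketch of the model just defined, for a single period. The dimensions and exposure names are made up for illustration; the closing regression previews the cross-sectional estimation described next, with the intercept playing the role of the market factor:

set.seed(1)
N <- 5; K <- 2                                # illustrative sizes only
b <- cbind(MV = rnorm(N), BM = rnorm(N))      # exposures, assumed known
f <- c(0.02, -0.01)                           # true factor returns for one period
r <- as.vector(b %*% f) + rnorm(N, 0, 0.005)  # r_t = b f_t + epsilon_t
coef(lm(r ~ b))                               # intercept ~ f_M, slopes recover f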
The famous CAPM (Capital Asset Pricing Model) is a one-factor model with f equal to market returns. + +In the case of the fundamental factor model, we assume we know b: the factor exposures are asset characteristics, like market capitalization or book-to-market ratio. f is unknown, and we can use OLS or WLS regression to estimate it for each period. Specifically, +\begin{equation}\label{ffm} +r_t = f_M + b\hat{f}_t + \hat{\epsilon}_t\;, t=1 \cdots T +\end{equation} +$f_M$ is usually called the market factor or world factor, depending on whether the context is the country level or the global level. Econometrically, it is the intercept term of the fundamental factor model. $f_t$ is estimated cross-sectionally in each period t. + +This approach is also called the BARRA-type approach, since it was initially developed by BARRA, which later merged with MSCI. The famous Barra global equity model (GEM3) contains more than 50 factors. + +\section{Example} +We will walk through some examples in this section. The first example will use style factors like size, and then industry/country dummies. +\subsection{Loading Data} +Let's look at the arguments of \verb at fitFundamentalFactorModel()@, which handles the fundamental factor model in \verb at factorAnalytics@. +<>= +library(factorAnalytics) +args(fitFundamentalFactorModel) +@ +\verb at data@ is of class \verb at data.frame@ and is required to have \emph{assetvar}, \emph{returnvar} and \emph{datevar}. One can imagine the data in a panel data setup, which needs a firm variable and a time variable. So data has dimension (N x T) and at least 3 columns to specify the information needed. + + + + + +\end{document} \ No newline at end of file From noreply at r-forge.r-project.org Fri Aug 9 01:07:49 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 9 Aug 2013 01:07:49 +0200 (CEST) Subject: [Returnanalytics-commits] r2748 - pkg/FactorAnalytics/vignettes Message-ID: <20130808230749.DB75E180922@r-forge.r-project.org> Author: chenyian Date: 2013-08-09 01:07:49 +0200 (Fri, 09 Aug 2013) New Revision: 2748 Modified: pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw Log: Modified: pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw =================================================================== --- pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-08 22:33:31 UTC (rev 2747) +++ pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-08 23:07:49 UTC (rev 2748) @@ -38,7 +38,15 @@ @ \verb at data@ is of class \verb at data.frame@ and is required to have \emph{assetvar}, \emph{returnvar} and \emph{datevar}. One can imagine the data in a panel data setup, which needs a firm variable and a time variable. So data has dimension (N x T) and at least 3 columns to specify the information needed. +We download data from the CRSP/Compustat quarterly fundamentals and name it \verb at equity@, which contains 67 stocks from January 2000 to December 2013.
+<>= +equity <- read.csv(file="equity.csv") +names(equity) +length(unique(equity$datadate)) # number of periods t +length(unique(equity$tic)) # number of assets +@ +We want return From noreply at r-forge.r-project.org Fri Aug 9 13:32:24 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 9 Aug 2013 13:32:24 +0200 (CEST) Subject: [Returnanalytics-commits] r2749 - in pkg/Meucci: data demo Message-ID: <20130809113224.D3B8A18504E@r-forge.r-project.org> Author: xavierv Date: 2013-08-09 13:32:24 +0200 (Fri, 09 Aug 2013) New Revision: 2749 Added: pkg/Meucci/data/sectorsTS.Rda pkg/Meucci/demo/S_TimeSeriesIndustries.R pkg/Meucci/demo/S_TimeSeriesVsCrossSectionIndustries.R Log: - added S_TimeseriesIndustries and S_TimeseriesVsCrossSectionIndustries demo scripts from Chapter 3 Added: pkg/Meucci/data/sectorsTS.Rda =================================================================== (Binary files differ) Property changes on: pkg/Meucci/data/sectorsTS.Rda ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: pkg/Meucci/demo/S_TimeSeriesIndustries.R =================================================================== --- pkg/Meucci/demo/S_TimeSeriesIndustries.R (rev 0) +++ pkg/Meucci/demo/S_TimeSeriesIndustries.R 2013-08-09 11:32:24 UTC (rev 2749) @@ -0,0 +1,76 @@ +#' This script fits a time-series linear factor model, computing the industry factor loadings, as described in A. Meucci, +#' "Risk and Asset Allocation", Springer, 2005, Chapter 3. +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_TimeSeriesIndustries.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} + +################################################################################################################## +### Loads weekly stock returns X and indices stock returns F +load("../data/securitiesTS.Rda"); +Data_Securities = securitiesTS$data[ , -1 ]; # 1st column is date + +load("../data/sectorsTS.Rda"); +Data_Sectors = sectorsTS$data[ , -(1:2) ]; # 1st column is date, 2nd column is SPX index + +################################################################################################################## +### Estimation +# linear returns for stocks +X = diff( Data_Securities ) / Data_Securities[ -nrow(Data_Securities), ]; + +# linear return for the factors +F = diff(Data_Sectors) / Data_Sectors[ -nrow(Data_Sectors), ]; + +T = dim(X)[1]; +N = dim(X)[2]; +K = dim(F)[ 2 ]; + +# compute sample estimates +E_X = matrix( apply(X, 2, mean ) ); +E_F = matrix( apply(F, 2, mean ) ); + +XF = cbind( X, F ) +SigmaJoint_XF = (dim(XF)[1]-1)/dim(XF)[1] * cov(XF); +Sigma_X = SigmaJoint_XF[ 1:N, 1:N ]; +Sigma_XF = SigmaJoint_XF[ 1:N, (N+1):(N+K) ]; +Sigma_F = SigmaJoint_XF[ (N+1):(N+K), (N+1):(N+K) ]; +Corr_F = cov2cor(Sigma_F); +Corr_F = tril(Corr_F, -1); + +# compute OLS loadings for the linear return model +X_ = X - repmat( t(E_X), T, 1 ); +F_ = F - repmat( t(E_F), T, 1 ); +B = Sigma_XF %*% solve(Sigma_F); +U = X_ - F_ %*% t(B); + +################################################################################################################## +### Residual analysis + +UF = cbind(U,F); +SigmaJoint_UF = ( dim( UF )[1]-1 )/dim( UF )[1] * cov( UF ); +CorrJoint_UF = cov2cor( SigmaJoint_UF ); + +# correlations of residuals with factors are null +Corr_UF = CorrJoint_UF[ 1:N, (N+1):(N+K) ]; +mean_Corr_UF = mean( abs( as.array( Corr_UF )) ); +max_Corr_UF = max( abs( as.array( Corr_UF )) ); + +disp( 
mean_Corr_UF ); +disp( max_Corr_UF ); + +dev.new(); +hist( Corr_UF, 100); + +# correlations between residuals is not null +Corr_U = tril( CorrJoint_UF[ 1:N, 1:N ], -1 ); +Corr_U = Corr_U[ Corr_U != 0 ]; +mean_Corr_U = mean( abs( Corr_U ) ); +max_Corr_U = max( abs( Corr_U ) ); +disp( mean_Corr_U ); +disp( max_Corr_U ); + +dev.new(); +hist(Corr_U, 100); + Added: pkg/Meucci/demo/S_TimeSeriesVsCrossSectionIndustries.R =================================================================== --- pkg/Meucci/demo/S_TimeSeriesVsCrossSectionIndustries.R (rev 0) +++ pkg/Meucci/demo/S_TimeSeriesVsCrossSectionIndustries.R 2013-08-09 11:32:24 UTC (rev 2749) @@ -0,0 +1,52 @@ + +################################################################################################################## +#' This script computes the correlation between explicit, time-series industry factor returns and implicit, +#' cross-section industry factor returns, as described in A. Meucci, "Risk and Asset Allocation", Springer, 2005, +#' Chapter 3. +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_TimeSeriesVsCrossSectionIndustries.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} +################################################################################################################## +### Load data +# loads weekly stock returns X and indices stock returns F +load("../data/securitiesTS.Rda"); +Data_Securities = securitiesTS$data[ , -1 ]; # 1st column is date + +load("../data/sectorsTS.Rda"); +Data_Sectors = sectorsTS$data[ , -(1:2) ]; + +load("../data/securitiesIndustryClassification.Rda"); +Securities_IndustryClassification = securitiesIndustryClassification$data; + +################################################################################################################## +# Linear returns for stocks +X = ( Data_Securities[ -1, ] - Data_Securities[ -nrow(Data_Securities), ] ) / Data_Securities[ -nrow(Data_Securities), ]; + +# explicit, time-series industry factor returns +F_ts = (Data_Sectors[ -1, ] - Data_Sectors[ -nrow(Data_Sectors), ] ) / Data_Sectors[ -nrow(Data_Sectors), ] ; +K = ncol(F_ts); + +# implicit, cross-section industry factor returns +Sigma_X = (dim(X)[1]-1)/dim(X)[1] * cov(X); +B = Securities_IndustryClassification; +Phi = diag(1 / diag( Sigma_X ), length(diag( Sigma_X ) ) ); +tmp = t(B) %*% Phi %*% B ; +F_cs = t( diag( diag(tmp) ^( -1 ), dim(tmp) ) %*% t(B) %*% Phi %*% t(X)); + + +################################################################################################################## +### Correlation analysis +Corr_cs_ts = matrix( 0, K, 1 ); +for( k in 1 : K ) +{ + C = cor( cbind( F_cs[ , k], F_ts[ , k ] )); + Corr_cs_ts[ k ] = C[ 1, 2 ]; +} + +# time series factors are highly correlated with their cross-sectional counterparts +dev.new(); +hist( Corr_cs_ts, seq( 0, 1, 0.1), xlim = c(-0.2, 0.2)); + From noreply at r-forge.r-project.org Fri Aug 9 15:09:18 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 9 Aug 2013 15:09:18 +0200 (CEST) Subject: [Returnanalytics-commits] r2750 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: Week1/Code Week1/Vignette Week5/Code Week6-7/Code Week6-7/Code/Equivalent Matlab Code Message-ID: <20130809130918.747F5184678@r-forge.r-project.org> Author: shubhanm Date: 2013-08-09 15:09:18 +0200 (Fri, 09 Aug 2013) New Revision: 2750 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code/ 
pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code/NWmissings.m pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code/regstats2.m pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code/sim_NWmissings.m pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Tests/ Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/GLMSmoothIndex.R pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/LoSharpeRatio.R pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Vignette/OkunevWhite-Graph1.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Vignette/OkunevWhite-Graph10.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Vignette/OkunevWhite.log pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Vignette/OkunevWhite.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/Week5/Code/CDrawdown.R Log: Tests for week 6-7 codes, roxygen code readability Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/GLMSmoothIndex.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/GLMSmoothIndex.R 2013-08-09 11:32:24 UTC (rev 2749) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/GLMSmoothIndex.R 2013-08-09 13:09:18 UTC (rev 2750) @@ -1,16 +1,18 @@ #'@title Getmansky Lo Makarov Smoothing Index Parameter #'@description -#A useful summary statistic for measuring the concentration of weights is -# a sum of square of Moving Average lag coefficient. -# This measure is well known in the industrial organization literature as the -# Herfindahl index, a measure of the concentration of firms in a given industry. -# The index is maximized when one coefficient is 1 and the rest are 0, in which case x ? 1: In the context of -#smoothed returns, a lower value of x implies more smoothing, and the upper bound -#of 1 implies no smoothing, hence x is reffered as a ''smoothingindex' '. +#'A useful summary statistic for measuring the concentration of weights is +#' the sum of squared moving-average lag coefficients. +#' This measure is well known in the industrial organization literature as the +#' Herfindahl index, a measure of the concentration of firms in a given industry. +#' The index is maximized when one coefficient is 1 and the rest are 0, in which case x = 1. In the context of +#'smoothed returns, a lower value of x implies more smoothing, and the upper bound +#'of 1 implies no smoothing, hence x is referred to as a ``smoothing index''. #' +#' \deqn{R_t = \mu + \beta \delta_t + \xi_t} #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns #' @author R +#' @aliases Return.Geltner #' @references "An econometric model of serial correlation and illiquidity in #' hedge fund returns" Mila Getmansky, Andrew W. 
Lo, Igor Makarov #' @@ -18,7 +20,7 @@ #' @examples #' #' data(edhec) -#' head(GLMSmoothIndex(edhec) +#' head(GLMSmoothIndex(edhec)) #' #' @export GLMSmoothIndex<- Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/LoSharpeRatio.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/LoSharpeRatio.R 2013-08-09 11:32:24 UTC (rev 2749) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/LoSharpeRatio.R 2013-08-09 13:09:18 UTC (rev 2750) @@ -1,3 +1,30 @@ +#' Lo Sharpe Ratio +#' +#' The building blocks of the Sharpe ratio (expected returns and volatilities) +#' are unknown quantities that must be estimated statistically and are, +#' therefore, subject to estimation error. In an illustrative +#' empirical example of mutual funds and hedge funds, Andrew Lo finds that the annual Sharpe ratio for a hedge fund can be overstated by as much as 65 percent +#' because of the presence of serial correlation in monthly returns, and once +#' this serial correlation is properly taken into account, the rankings of hedge +#' funds based on Sharpe ratios can change dramatically. +#' +#' +#' +#' @aliases LoSharpeRatio +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of +#' asset returns +#' @param \dots any other passthru parameters +#' @author Peter Carl +#' @references \emph{The Statistics of Sharpe Ratios}, Andrew Lo +#' +#' @keywords ts multivariate distribution models +#' @examples +#' library(PerformanceAnalytics) +#' data(edhec) +#' LoSharpeRatio(edhec) +#' @export + LoSharpeRatio<- function(R = NULL,Rf=0.,q = 0., ...) { @@ -67,4 +94,17 @@ return(results) } } -} \ No newline at end of file +} + + +############################################################################### +# R (http://r-project.org/) Econometrics for Performance and Risk Analysis +# +# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson +# +# This R package is distributed under the terms of the GNU Public License (GPL) +# for full details see the file COPYING +# +# $Id: LoSharpeRatio.R 2271 2012-09-02 01:56:23Z braverock $ +# +############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Vignette/OkunevWhite-Graph1.pdf =================================================================== (Binary files differ) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Vignette/OkunevWhite-Graph10.pdf =================================================================== (Binary files differ) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Vignette/OkunevWhite.log =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Vignette/OkunevWhite.log 2013-08-09 11:32:24 UTC (rev 2749) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Vignette/OkunevWhite.log 2013-08-09 13:09:18 UTC (rev 2750) @@ -1,4 +1,4 @@ -This is pdfTeX, Version 3.1415926-2.4-1.40.13 (MiKTeX 2.9) (preloaded format=pdflatex 2013.7.14) 30 JUL 2013 09:40 +This is pdfTeX, Version 3.1415926-2.4-1.40.13 (MiKTeX 2.9) (preloaded format=pdflatex 2013.7.14) 3 AUG 2013 06:57 entering extended mode **OkunevWhite.tex @@ -366,7 +366,7 @@ cm/cmsy10.pfb> -Output written on OkunevWhite.pdf (5 pages, 174408 bytes). +Output written on OkunevWhite.pdf (5 pages, 174471 bytes). PDF statistics: 85 PDF objects out of 1000 (max. 8388607) 0 named destinations out of 1000 (max. 
500000) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Vignette/OkunevWhite.pdf =================================================================== (Binary files differ)
Dear All, I have a problem with Return.rebalancing() from PerformanceAnalytics after upgrading to the latest stable release v1.1.0. The simple example below was working with previous versions but now returns the following error message:

> testRet <- xts(cbind(c(0.10,-0.08,0.15), c(0.02,0.01,-0.01)),
+ order.by=as.Date(c("2013-07-19","2013-07-26","2013-08-02")) )
> names(testRet) <- c("equity", "bond")
>
> testWeights <- xts(cbind(c(0.2,0.3), c(0.8,0.7)),
+ order.by=as.Date(c("2013-07-19","2013-07-26")) )
> names(testWeights) <- c("equity", "bond")
>
> Return.rebalancing(testRet,testWeights)
Error in `[.xts`(result, 2:length(result)) : subscript out of bounds

Can you advise? Best, Paolo From brian at braverock.com Fri Aug 9 16:47:53 2013 From: brian at braverock.com (Brian G. Peterson) Date: Fri, 09 Aug 2013 09:47:53 -0500 Subject: [Returnanalytics-commits] error with Return.rebalancing after upgrading to latest stable PerformanceAnalytics v1.1.0 In-Reply-To: References: Message-ID: <52050119.5090704@braverock.com> On 08/09/2013 09:14 AM, Paolo Cavatore wrote: >>Return.rebalancing(testRet,testWeights) > Error in `[.xts`(result, 2:length(result)) : subscript out of bounds > > Can you advise? Your example works for me with current code. I suggest updating xts. xts was updated on CRAN recently to solve a likely related problem to the one you report. In the future, please email the maintainer(s) directly, and don't email the SVN commits list. This list is for monitoring the source commit system, not discussing things. Regards, Brian Ref:

require(PerformanceAnalytics)
testRet <- xts(cbind(c(0.10,-0.08,0.15), c(0.02,0.01,-0.01)),
order.by=as.Date(c("2013-07-19","2013-07-26","2013-08-02")) )
names(testRet) <- c("equity", "bond")
testWeights <- xts(cbind(c(0.2,0.3), c(0.8,0.7)),
order.by=as.Date(c("2013-07-19","2013-07-26")) )
names(testWeights) <- c("equity", "bond")
Return.rebalancing(testRet,testWeights)
sessionInfo()

#################### > Return.rebalancing(testRet,testWeights) portfolio.returns 2013-07-26 -0.008 2013-08-02 0.038 > sessionInfo() R version 3.0.1 (2013-05-16) Platform: x86_64-pc-linux-gnu (64-bit) locale: [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 [7] LC_PAPER=C LC_NAME=C [9] LC_ADDRESS=C LC_TELEPHONE=C [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] PerformanceAnalytics_1.1.0 xts_0.9-5 [3] zoo_1.7-10 loaded via a namespace (and not attached): [1] grid_3.0.1 lattice_0.20-15 ####################### -- Brian G. 
Peterson http://braverock.com/brian/ Ph: 773-459-4973 IM: bgpbraverock From noreply at r-forge.r-project.org Fri Aug 9 17:36:39 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 9 Aug 2013 17:36:39 +0200 (CEST) Subject: [Returnanalytics-commits] r2751 - pkg/PerformanceAnalytics/sandbox/pulkit/week6 Message-ID: <20130809153640.03A56184DD4@r-forge.r-project.org> Author: pulkit Date: 2013-08-09 17:36:39 +0200 (Fri, 09 Aug 2013) New Revision: 2751 Added: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBetaMulti.R Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R Log: Multipath CDaR Drawdown Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R 2013-08-09 13:09:18 UTC (rev 2750) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R 2013-08-09 15:36:39 UTC (rev 2751) @@ -37,12 +37,18 @@ CdarMultiPath<-function (R,ps,sample,instr, geometric = TRUE,p = 0.95, ...) { + #p = .setalphaprob(p) R = na.omit(R) nr = nrow(R) - # checking if nr*p is an integer + # ERROR HANDLING and TESTING + #if(sample == instr){ + + #} + multicdar<-function(x){ + # checking if nr*p is an integer if((p*nr) %% 1 == 0){ drawdowns = as.matrix(Drawdowns(x)) drawdowns = sort(drawdowns, decreasing = TRUE) @@ -119,7 +125,8 @@ } } R = checkData(R, method = "matrix") - result = matrix(nrow = 1, ncol = ncol(R)) + result = matrix(nrow = 1, ncol = ncol(R)/sample) + for (i in 1:(ncol(R)/sample)) { ret<-NULL for(j in 1:sample){ ret<-cbind(ret,R[,(j-1)*ncol(R)/sample+i]) } result[i] <- multicdar(ret) } - dim(result) = c(1, NCOL(R)) - colnames(result) = colnames(R) + dim(result) = c(1, NCOL(R)/sample) + colnames(result) = colnames(R)[1:(ncol(R)/sample)] rownames(result) = paste("Conditional Drawdown ", p * 100, "%", sep = "") return(result) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R 2013-08-09 13:09:18 UTC (rev 2750) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R 2013-08-09 15:36:39 UTC (rev 2751) @@ -18,7 +18,7 @@ f.obj = c(rep(0,nr),rep(((1/(1-p))*(1/nr)),nr),1) # k varies from 1:nr - # constraint : zk -uk +y >= 0 + # constraint : -uk +zk +y >= 0 f.con = cbind(-diag(nr),diag(nr),1) f.dir = c(rep(">=",nr)) f.rhs = c(rep(0,nr)) @@ -41,6 +41,7 @@ f.rhs = c(f.rhs,rep(0,nr)) val = lp("min",f.obj,f.con,f.dir,f.rhs) + val_disp = lp("min",f.obj,f.con,f.dir,f.rhs,compute.sens = TRUE ) result = -val$objval } if (invert) Added: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBetaMulti.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBetaMulti.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBetaMulti.R 2013-08-09 15:36:39 UTC (rev 2751) @@ -0,0 +1,159 @@ +#'@title +#'Drawdown Beta for Multiple Paths +#' +#'@description +#'The drawdown beta is formulated as follows +#' +#'\deqn{\beta_{DD}^i = \frac{\sum_{s=1}^S \sum_{t=1}^T p_s q_{st}^{\ast} (w_{s,k^{\ast}(s,t)}^i - w_{st}^i)}{D_{\alpha}(w^M)}} +#' Here \eqn{\beta_{DD}} is the drawdown beta of the instrument for multiple sample paths.
+#'\eqn{k^{\ast}(s,t) \in argmax_{t_{\tau} \le k \le t} w_{sk}^p(x^{\ast})} +#' +#'The numerator in \eqn{\beta_{DD}} is the average rate of return of the +#'instrument over time periods corresponding to the \eqn{(1-\alpha)T} largest +#'drawdowns of the optimal portfolio, where \eqn{w_t - w_{k^{\ast}(t)}} +#'is the cumulative rate of return of the instrument from the optimal portfolio +#' peak time \eqn{k^{\ast}(t)} to time t. +#' +#'The difference between CDaR and standard betas can be explained by the +#'conceptual difference in beta definitions: the standard beta accounts for +#'the fund returns over the whole return history, including the periods +#'when the market goes up, while CDaR betas focus only on market drawdowns +#'and, thus, are not affected when the market performs well. +#' +#'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#'@param Rm Return series of the optimal portfolio, an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#'@param p confidence level for the calculation, default p = 0.95 +#'@param weights portfolio weighting vector, default NULL, see Details +#' @param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default TRUE +#' @param type The type of BetaDrawdown. If "alpha", the given alpha value is used (default 0.95); if "average", alpha = 0; if "max", alpha = 1. +#'@param \dots any passthru variable. +#' +#'@references +#'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model +#'(CAPM) with Drawdown Measure. Research Report 2012-9, ISE Dept., University +#'of Florida, September 2012. +#' +#'@examples +#' +#'BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431 + +MultiBetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ + + # DESCRIPTION: + # + # The function is used to find the Drawdown Beta for multiple sample paths. + # + # INPUT: + # The Return series of the portfolio and the optimal portfolio + # are taken as the input. + # + # OUTPUT: + # The Drawdown beta for multiple sample paths is given as the output. 
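+ # Reading of the LP set up below, as implied by the formula in the
+ # documentation above: the variables are weights q[s,t] over sample paths
+ # s = 1..sample and periods t = 1..nr. The objective maximizes the
+ # probability-weighted drawdowns sum over s,t of ps[s]*drawdowns[t,s]*q[s,t];
+ # constraint 1 normalizes sum over s,t of ps[s]*q[s,t] to 1; constraint 2
+ # keeps every q[s,t] >= 0; constraint 3 caps each q[s,t] at 1/((1-alpha)*T).
+ # The optimum therefore loads on the worst (1-alpha) fraction of drawdowns,
+ # mirroring the single-path CDaR linear program in CdaR.R.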
+ + + x = checkData(R) + xm = checkData(Rm) + columnnames = colnames(R) + columns = ncol(R) + drawdowns_m = Drawdowns(Rm) + type = type[1] + if(type=="average"){ + p = 0 + } + if(type == "max"){ + p = 1 + } + # if nr*p is not an integer + #f.obj = c(rep(0,nr),rep((1/(1-p))*(1/nr),nr),1) + drawdowns = -as.matrix(drawdowns_m) + # Optimization to define Q for the optimal portfolio + # The objective function is defined + f.obj = NULL + for(i in 1:sample){ + for(j in 1:nr){ + f.obj = c(f.obj,ps[i]*drawdowns[j,i]) + } + } + f.con = NULL + # constraint 1: ps.qst = 1 + for(i in 1:sample){ + for(j in 1:nr){ + f.con = c(f.con,ps[i]) + } + } + f.con = matrix(f.con,nrow =1) + f.dir = "==" + f.rhs = 1 + # constraint 2 : qst >= 0 + for(i in 1:sample){ + for(j in 1:nr){ + r<-rep(0,sample*nr) + r[(i-1)*nr+j] = 1 + f.con = rbind(f.con,r) + } + } + f.dir = c(f.dir,rep(">=",sample*nr)) + f.rhs = c(f.rhs,rep(0,sample*nr)) + + + # constraint 3 : qst <= 1/((1-alpha)*T) + for(i in 1:sample){ + for(j in 1:nr){ + r<-rep(0,sample*nr) + r[(i-1)*nr+j] = 1 + f.con = rbind(f.con,r) + } + } + f.dir = c(f.dir,rep("<=",sample*nr)) + f.rhs = c(f.rhs,rep(1/((1-p)*nr),sample*nr)) + + val = lp("max",f.obj,f.con,f.dir,f.rhs) + q = matrix(val$solution,ncol = sample) + + # TODO INCORPORATE WEIGHTS + + if(geometric){ + cumul_xm = cumprod(xm+1)-1 + } + else{ + cumul_xm = cumsum(xm) + } + # Function to calculate Drawdown beta for multipath + multiDDbeta<-function(x){ + boolean = (cummax(cumul_xm)==cumul_xm) + index = NULL + for(j in 1:nrow(Rm)){ + if(boolean[j] == TRUE){ + index = c(index,j) + b = j + } + else{ + index = c(index,b) + } + } + for(i in 1:sample){ + for(j in 1:nrow(x)){ + beta_dd = (ps[i]*q[j,i]*(x[index,i]-x[,i]))/CDaR(Rm,p=p) + # NOTE: returns on the first pass; the multi-path aggregation + # across s and t is still incomplete work in progress + return(beta_dd) + } + } + } + + result = matrix(nrow = 1, ncol = ncol(R)/sample) + + for (i in 1:(ncol(R)/sample)) { + ret<-NULL + for(j in 1:sample){ + ret<-cbind(ret,R[,(j-1)*ncol(R)/sample+i]) + } + result[i] <- multiDDbeta(ret) + } + + dim(result) = c(1, NCOL(R)/sample) + colnames(result) = colnames(R)[1:(ncol(R)/sample)] + rownames(result) = paste("Conditional Drawdown ", p * 100, "%", sep = "") + return(result) +} + + From noreply at r-forge.r-project.org Fri Aug 9 22:06:35 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 9 Aug 2013 22:06:35 +0200 (CEST) Subject: [Returnanalytics-commits] r2752 - pkg/FactorAnalytics/vignettes Message-ID: <20130809200635.77165180536@r-forge.r-project.org> Author: chenyian Date: 2013-08-09 22:06:35 +0200 (Fri, 09 Aug 2013) New Revision: 2752 Added: pkg/FactorAnalytics/vignettes/equity.Rdata Log: add data set for vignette. 
Added: pkg/FactorAnalytics/vignettes/equity.Rdata =================================================================== (Binary files differ) Property changes on: pkg/FactorAnalytics/vignettes/equity.Rdata ___________________________________________________________________ Added: svn:mime-type + application/octet-stream From noreply at r-forge.r-project.org Sat Aug 10 03:11:44 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 10 Aug 2013 03:11:44 +0200 (CEST) Subject: [Returnanalytics-commits] r2753 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: Week2/Code Week3/Code Week4/Code Week5/Code Message-ID: <20130810011144.432BB184288@r-forge.r-project.org> Author: shubhanm Date: 2013-08-10 03:11:43 +0200 (Sat, 10 Aug 2013) New Revision: 2753 Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/CalmarRatio.Normalized.R pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/Return.GLM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/EmaxDDGBM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/chart.Autocorrelation.R pkg/PerformanceAnalytics/sandbox/Shubhankit/Week4/Code/AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/Week5/Code/CDrawdown.R Log: Roxygen readability of codes (weeks 1, 2, 3, 4, 5) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/CalmarRatio.Normalized.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/CalmarRatio.Normalized.R 2013-08-09 20:06:35 UTC (rev 2752) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/CalmarRatio.Normalized.R 2013-08-10 01:11:43 UTC (rev 2753) @@ -132,6 +132,6 @@ # This R package is distributed under the terms of the GNU Public License (GPL) # for full details see the file COPYING # -# $Id: CalmarRatio.R 1955 2012-05-23 16:38:16Z braverock $ +# $Id: CalmarRatioNormalized.R 1955 2012-05-23 16:38:16Z braverock $ # ############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/Return.GLM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/Return.GLM.R 2013-08-09 20:06:35 UTC (rev 2752) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week2/Code/Return.GLM.R 2013-08-10 01:11:43 UTC (rev 2753) @@ -1,3 +1,6 @@ +#' Getmansky Lo Makarov Unsmoothed Return Model +#' +#' #' True returns represent the flow of information that would determine the equilibrium #' value of the fund's securities in a frictionless market. However, true economic #' returns are not observed. 
Instead, Rot Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/EmaxDDGBM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/EmaxDDGBM.R 2013-08-09 20:06:35 UTC (rev 2752) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/EmaxDDGBM.R 2013-08-10 01:11:43 UTC (rev 2753) @@ -9,7 +9,7 @@ #' @author R #' @keywords Expected Drawdown Using Brownian Motion Assumptions -#' +#' @rdname EmaxDDGBM #' @export table.EMaxDDGBM <- function (R,digits =4) @@ -182,13 +182,14 @@ } ############################################################################### -# R (http://r-project.org/) +################################################################################ +# R (http://r-project.org/) Econometrics for Performance and Risk Analysis # -# Copyright (c) 2004-2013 +# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson # # This R package is distributed under the terms of the GNU Public License (GPL) # for full details see the file COPYING # -# $Id: EMaxDDGBM +# $Id: EmaxDDGBM.R 2271 2012-09-02 01:56:23Z braverock $ # ############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/chart.Autocorrelation.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/chart.Autocorrelation.R 2013-08-09 20:06:35 UTC (rev 2752) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week3/Code/chart.Autocorrelation.R 2013-08-10 01:11:43 UTC (rev 2753) @@ -18,7 +18,7 @@ #' data(edhec[,1]) #' chart.Autocorrelation(edhec[,1]) #' -#' +#' @rdname chart.Autocorrelation #' @export chart.Autocorrelation <- function (R, ...) @@ -44,4 +44,15 @@ -} \ No newline at end of file +} +############################################################################### +# R (http://r-project.org/) Econometrics for Performance and Risk Analysis +# +# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson +# +# This R package is distributed under the terms of the GNU Public License (GPL) +# for full details see the file COPYING +# +# $Id: Chart.Autocorrelation.R 2271 2012-09-02 01:56:23Z braverock $ +# +############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week4/Code/AcarSim.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week4/Code/AcarSim.R 2013-08-09 20:06:35 UTC (rev 2752) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week4/Code/AcarSim.R 2013-08-10 01:11:43 UTC (rev 2753) @@ -1,8 +1,19 @@ -#To get some insight on the relationships between maximum drawdown per unit of volatility -#and mean return divided by volatility, we have proceeded to Monte-Carlo simulations. -# We have simulated cash flows over a period of 36 monthly returns and measured maximum -#drawdown for varied levels of annualised return divided by volatility varying from minus -# two to two by step of 0.1. The process has been repeated six thousand times. +#' Acar and Shane Maximum Loss +#' +#'To get some insight on the relationships between maximum drawdown per unit of volatility +#'and mean return divided by volatility, we have proceeded to Monte-Carlo simulations. 
+#' We have simulated cash flows over a period of 36 monthly returns and measured maximum +#'drawdown for varied levels of annualised return divided by volatility varying from minus +#' two to two by step of 0.1. The process has been repeated six thousand times. +#' @author R Project +#' @references Chekhlov, A., Uryasev, S., and Zabarankin, M. Drawdown Measure in Portfolio Optimization, \emph{International Journal of Theoretical and Applied Finance}, Vol. 8, No. 1 (2005), 13-58. +#' @keywords Conditional Drawdown models +#' @examples +#' library(PerformanceAnalytics) +#' AcarSim() +#' @rdname Cdrawdown +#' @export AcarSim <- function() { @@ -58,4 +69,16 @@ title("Maximum Drawdown/Volatility as a function of Return/Volatility 36 monthly returns simulated 6,000 times") -} \ No newline at end of file +} + +############################################################################### +# R (http://r-project.org/) Econometrics for Performance and Risk Analysis +# +# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson +# +# This R package is distributed under the terms of the GNU Public License (GPL) +# for full details see the file COPYING +# +# $Id: AcarSim.R 2163 2012-07-16 00:30:19Z braverock $ +# +############################################################################### \ No newline at end of file Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week5/Code/CDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week5/Code/CDrawdown.R 2013-08-09 20:06:35 UTC (rev 2752) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week5/Code/CDrawdown.R 2013-08-10 01:11:43 UTC (rev 2753) @@ -1,30 +1,29 @@ -#' Conditional Drawdown +#' Chekhlov Conditional Drawdown at Risk #' #' A new one-parameter family of risk measures called Conditional Drawdown (CDD) has #'been proposed. These measures of risk are functionals of the portfolio drawdown (underwater) curve considered in active portfolio management. For some value of the tolerance -#' parameter \eqn{\alpha}, in the case of a single sample path, the drawdown functional is defined as +#' parameter, in the case of a single sample path, the drawdown functional is defined as #'the mean of the worst (1-alpha)*100% drawdowns. The CDD measure generalizes the #'notion of the drawdown functional to a multi-scenario case and can be considered as a #'generalization of deviation measure to a dynamic case. The CDD measure includes the #'Maximal Drawdown and Average Drawdown as its limiting cases. #' +#' The model is focused on the concept of drawdown measure, which possesses all the properties of a deviation measure and generalizes deviation measures to a dynamic case. It covers the concept of risk profiling (Mixed Conditional Drawdown, a generalization of CDD), optimization techniques for CDD computation (reduction to a linear programming (LP) problem), and portfolio optimization with a constraint on Mixed CDD. +#' The model develops the concept of drawdown measure by generalizing the notion +#' of the CDD to the case of several sample paths for the portfolio uncompounded rate +#' of return. #' @param Ra return vector of the portfolio -#' @param Rb return vector of the benchmark asset -#' @param scale number of periods in a year (daily scale = 252, monthly scale = -#' 12, quarterly scale = 4) -#' @author Peter Carl +#' @param p confidence level +#' @author R Project #' @references Chekhlov, A., Uryasev, S., and Zabarankin, M. Drawdown Measure in Portfolio Optimization, \emph{International Journal of Theoretical and Applied Finance}, Vol. 8, No. 1 (2005), 13-58. #' @keywords Conditional Drawdown models #' @examples #' -#' data(managers) -#' ActivePremium(managers[, "HAM1", drop=FALSE], managers[, "SP500 TR", drop=FALSE]) -#' ActivePremium(managers[,1,drop=FALSE], managers[,8,drop=FALSE]) -#' ActivePremium(managers[,1:6], managers[,8,drop=FALSE]) -#' ActivePremium(managers[,1:6], managers[,8:7,drop=FALSE]) -#' @rdname ActivePremium -#' @aliases ActivePremium, ActiveReturn +#' library(PerformanceAnalytics) +#' data(edhec) +#' CDrawdown(edhec) +#' @rdname Cdrawdown +#' @export CDrawdown <- From noreply at r-forge.r-project.org Sat Aug 10 03:30:11 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 10 Aug 2013 03:30:11 +0200 (CEST) Subject: [Returnanalytics-commits] r2754 - in pkg/FactorAnalytics: R vignettes Message-ID: <20130810013011.231171833B9@r-forge.r-project.org> Author: chenyian Date: 2013-08-10 03:30:10 +0200 (Sat, 10 Aug 2013) New Revision: 2754 Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R pkg/FactorAnalytics/vignettes/equity.Rdata pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw Log: update vignette and data set Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-10 01:11:43 UTC (rev 2753) +++ pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-10 01:30:10 UTC (rev 2754) @@ -138,16 +138,16 @@ require(robust) - assets = unique(data[,assetvar]) - timedates = as.Date(unique(data[,datevar])) + assets = unique(data[[assetvar]]) + timedates = as.Date(unique(data[[datevar]])) +# data[[datevar]] <- as.Date(data[[datevar]]) - if (length(timedates) < 2) stop("At least two time points, t and t-1, are needed for fitting the factor model.") - if (!is(exposure.names, "vector") || !is.character(exposure.names)) - stop("exposure argument invalid---must be character vector.") - if (!is(assets, "vector") || !is.character(assets)) - stop("assets argument invalid---must be character vector.") + if (length(timedates) < 2) stop("At least two time points, t and t-1, are needed for fitting the factor model.") + if (!is(exposure.names, "vector") || !is.character(exposure.names)) + stop("exposure argument invalid---must be character vector.") + if (!is(assets, "vector") || !is.character(assets)) + stop("assets argument invalid---must be character vector.") wls <- as.logical(wls) full.resid.cov <- as.logical(full.resid.cov) @@ -279,8 +279,7 @@ else stop(mess) } tstat <- rep(NA, length(model$coef)) - tstat[!is.na(model$coef)] <- summary(model, cor = FALSE)$coef[, - 3] + tstat[!is.na(model$coef)] <- summary(model, cor = FALSE)$coef[,3] alphaord <- order(names(model$coef)) c(length(model$coef), model$coef[alphaord], tstat[alphaord], model$resid) @@ -319,7 +318,8 @@ FE.hat <- by(data = data, INDICES = as.numeric(data[[datevar]]), FUN = wls.robust, modelterms = regression.formula, conlist = contrasts.list, w = weights) - } else { + } + else { # wls.classic resids <- by(data = data, INDICES = as.numeric(data[[datevar]]), FUN = function(xdf, modelterms, conlist) { Modified: pkg/FactorAnalytics/vignettes/equity.Rdata =================================================================== (Binary files differ) Modified: pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw =================================================================== --- pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-10 01:11:43 UTC (rev 2753) +++ pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-10 01:30:10 UTC (rev 2754) @@ -41,13 +41,49 @@ We download data from the CRSP/Compustat quarterly fundamentals and 
name it \verb at equity@; it contains 67 stocks from January 2000 to December 2013. <>= -equity <- read.csv(file="equity.csv") +equity <- data(equity) names(equity) length(unique(equity$datadate)) # number of periods t length(unique(equity$tic)) # number of assets @ -We want return +We want asset returns. +<>= +library(quantmod) # for Delt; see ?Delt for details +equity <- cbind(equity,do.call(rbind,lapply(split(equity,equity$tic), + function(x) Delt(x$PRCCQ)))) +names(equity)[22] <- "RET" +@ +We want market value and book-to-market ratio too. Market value is common shares outstanding times price; for book value we use common/ordinary equity value. +<>= +equity$MV <- equity$PRCCQ*equity$CSHOQ +equity$BM <- equity$CEQQ/equity$MV +@ +Now we use model \ref{ffm} with K=2 and b = [ MV, BM ]. +We will get an error message if \verb at datevar@ is not compatible with the \verb at as.Date@ format. In our example, the date variable is \emph{DATACQTR} and looks like "2000Q1", so we have to convert it to an \verb at as.Date@-compatible form; we can use \verb at as.yearqtr@ to do so. Also, we will use a character string for the asset variable instead of a factor. +<>= +a <- unlist( lapply(strsplit(as.character(equity$DATACQTR),"Q"), + function(x) paste(x[[1]],"-",x[[2]],sep="") ) ) +equity$yearqtr <- as.yearqtr(a,format="%Y-%q") +equity$tic <- as.character(equity$tic) +equity <- subset(equity,yearqtr != "2000 Q1") +# drop the first observation of each asset +@ +Fit the function: +<>= +fit.fund <- fitFundamentalFactorModel(exposure.names=c("BM","MV"),datevar="yearqtr", + returnsvar ="RET",assetvar="tic",wls=TRUE,data=equity) +names(fit.fund) +@ + + + + + + + + + \end{document} \ No newline at end of file From noreply at r-forge.r-project.org Sat Aug 10 03:31:32 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 10 Aug 2013 03:31:32 +0200 (CEST) Subject: [Returnanalytics-commits] r2755 - in pkg/FactorAnalytics: R man Message-ID: <20130810013132.D3DED1833B9@r-forge.r-project.org> Author: chenyian Date: 2013-08-10 03:31:32 +0200 (Sat, 10 Aug 2013) New Revision: 2755 Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd Log: edit fitFundamentalFactorModel.Rd Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-10 01:30:10 UTC (rev 2754) +++ pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-10 01:31:32 UTC (rev 2755) @@ -22,7 +22,7 @@ #' #' @param data data.frame, data must have \emph{assetvar}, \emph{returnvar}, \emph{datevar} #' , and exposure.names. Generally, the data is in a panel data setup, so it needs firm variables -#' and time variables. +#' and time variables. Data has to be a balanced panel. #' @param exposure.names a character vector of exposure names for the factor model #' @param wls logical flag, TRUE for weighted least squares, FALSE for ordinary #' least squares Modified: pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd 2013-08-10 01:30:10 UTC (rev 2754) +++ pkg/FactorAnalytics/man/fitFundamentalFactorModel.Rd 2013-08-10 01:31:32 UTC (rev 2755) @@ -12,7 +12,8 @@ \item{data}{data.frame, data must have \emph{assetvar}, \emph{returnvar}, \emph{datevar} , and exposure.names. Generally, the data is in a panel data setup, so it needs firm - variables and time variables.} + variables and time variables. Data has to be a balanced + panel.} \item{exposure.names}{a character vector of exposure names for the factor model}
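The balanced-panel requirement documented in r2755 is easy to verify before fitting. A minimal sketch in R, assuming the equity data.frame from the r2754 vignette chunks above; the check itself is illustrative and is not part of the package:

# a balanced panel has exactly one observation per asset per quarter
equity <- read.csv("equity.csv")
obs <- table(equity$tic, equity$DATACQTR)
stopifnot(all(obs == 1))

If this stops, subsetting all assets to a common date range is usually the simplest fix.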
From noreply at r-forge.r-project.org Sat Aug 10 03:36:03 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 10 Aug 2013 03:36:03 +0200 (CEST) Subject: [Returnanalytics-commits] r2756 - pkg/FactorAnalytics/vignettes Message-ID: <20130810013603.D88281833B9@r-forge.r-project.org> Author: chenyian Date: 2013-08-10 03:36:03 +0200 (Sat, 10 Aug 2013) New Revision: 2756 Added: pkg/FactorAnalytics/vignettes/equity.csv Log: add equity csv file. Added: pkg/FactorAnalytics/vignettes/equity.csv =================================================================== --- pkg/FactorAnalytics/vignettes/equity.csv (rev 0) +++ pkg/FactorAnalytics/vignettes/equity.csv 2013-08-10 01:36:03 UTC (rev 2756) @@ -0,0 +1,3340 @@ +gvkey,datadate,fyearq,fqtr,indfmt,consol,popsrc,datafmt,tic,conm,CURCDQ,DATACQTR,DATAFQTR,CEQQ,CSHOQ,COSTAT,DVPSPQ,MKVALTQ,PRCCQ,SPCINDCD,SPCSECCD +1356,3/31/2000,2000,1,INDL,C,D,STD,AA,ALCOA INC,USD,2000Q1,2000Q1,5941,369.486,A,0.25,,70.25,115,970 +1356,6/30/2000,2000,2,INDL,C,D,STD,AA,ALCOA INC,USD,2000Q2,2000Q2,10984,866.001,A,0.125,,29,115,970 [... the remaining 3,337 rows of equity.csv, quarterly records for all 67 tickers in the same format, are omitted here ...] [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2756 From noreply at r-forge.r-project.org Sat Aug 10 03:37:17 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 10 Aug 2013 03:37:17 +0200 (CEST) Subject: [Returnanalytics-commits] r2757 - pkg/FactorAnalytics/vignettes Message-ID: <20130810013717.B1FC51833B9@r-forge.r-project.org> Author: chenyian Date: 2013-08-10 03:37:16 +0200 (Sat, 10 Aug 2013) New Revision: 2757 Modified: pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw Log: Modified: pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw =================================================================== --- pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-10 01:36:03 UTC (rev 2756) +++ pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-10 01:37:16 UTC (rev 2757) @@ -41,7 +41,8 @@ We download data from the CRSP/Compustat quarterly fundamental file and name it \verb at equity@; it contains 67 stocks from January 2000 to December 2013. <>= -equity <- data(equity) +#equity <- data(equity) +equity <- read.csv(file="equity.csv") names(equity) length(unique(equity$datadate)) # number of periods t length(unique(equity$tic)) # number of assets
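As an aside on the r2754 vignette code above, the strsplit/paste date conversion can likely be collapsed into a single call; a hedged sketch, assuming zoo's as.yearqtr format codes handle the "2000Q1" layout:

library(zoo)
# "%YQ%q" parses "2000Q1"-style strings directly; %q is the quarter format code
equity$yearqtr <- as.yearqtr(as.character(equity$DATACQTR), format = "%YQ%q")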
From noreply at r-forge.r-project.org Sat Aug 10 04:45:46 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 10 Aug 2013 04:45:46 +0200 (CEST) Subject: [Returnanalytics-commits] r2758 - pkg/PortfolioAnalytics/R Message-ID: <20130810024546.711D0184BCB@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-10 04:45:46 +0200 (Sat, 10 Aug 2013) New Revision: 2758 Modified: pkg/PortfolioAnalytics/R/extractstats.R Log: adding checks for extractStats functions Modified: pkg/PortfolioAnalytics/R/extractstats.R =================================================================== --- pkg/PortfolioAnalytics/R/extractstats.R 2013-08-10 01:37:16 UTC (rev 2757) +++ pkg/PortfolioAnalytics/R/extractstats.R 2013-08-10 02:45:46 UTC (rev 2758) @@ -59,30 +59,34 @@ #' @seealso \code{\link{optimize.portfolio}} #' @export extractStats.optimize.portfolio.DEoptim <- function(object, prefix=NULL, ...) { - - # first pull out the optimal portfolio - trow<-c(unlist(object$objective_measures),out=object$out,object$weights) - #colnames(trow)<-c(colnames(unlist(object$objective_measures)),'out',names(object$weights)) - result<-trow - l = length(object$DEoptim_objective_results) - nobj<-length(unlist(object$DEoptim_objective_results[[1]]$objective_measures)) - result=matrix(nrow=l,ncol=(nobj+length(object$weights))+1) - ncols<-ncol(result) - - for (i in 1:l) { - if(!is.atomic(object$DEoptim_objective_results[[i]])) { - result[i,1:nobj]<-unlist(object$DEoptim_objective_results[[i]]$objective_measures) - result[i,(nobj+1)]<-object$DEoptim_objective_results[[i]]$out - result[i,(nobj+2):ncols]<-object$DEoptim_objective_results[[i]]$weights - } + if(!inherits(object, "optimize.portfolio.DEoptim")) stop("object must be of class optimize.portfolio.DEoptim") + + # Check if object$DEoptim_objective_results is null; if so, the user called optimize.portfolio with trace=FALSE + if(is.null(object$DEoptim_objective_results)) stop("DEoptim_objective_results is null; trace=TRUE must be specified in optimize.portfolio") + + # first pull out the optimal portfolio + trow<-c(unlist(object$objective_measures),out=object$out,object$weights) + #colnames(trow)<-c(colnames(unlist(object$objective_measures)),'out',names(object$weights)) + result<-trow + l = length(object$DEoptim_objective_results) + nobj<-length(unlist(object$DEoptim_objective_results[[1]]$objective_measures)) + result=matrix(nrow=l,ncol=(nobj+length(object$weights))+1) + ncols<-ncol(result) + + for (i in 1:l) { + if(!is.atomic(object$DEoptim_objective_results[[i]])) { + result[i,1:nobj]<-unlist(object$DEoptim_objective_results[[i]]$objective_measures) + result[i,(nobj+1)]<-object$DEoptim_objective_results[[i]]$out + result[i,(nobj+2):ncols]<-object$DEoptim_objective_results[[i]]$weights } + } + + rnames<-c(names(unlist(object$DEoptim_objective_results[[1]]$objective_measures)),'out',paste('w',names(object$weights),sep='.')) + rnames<-name.replace(rnames) + colnames(result)<-rnames +
rownames(result) = paste(prefix,"DE.portf", index(object$DEoptim_objective_results), sep=".") + #rownames(result) = paste("DE.portf.", index(result), sep="") + return(result) } @@ -125,9 +129,12 @@ #' \code{\link{extractStats}} #' @export extractStats.optimize.portfolio.random <- function(object, prefix=NULL, ...){ -# This just flattens the $random_portfolio_objective_results part of the -# object -# @TODO: add a class check for the input object +# This just flattens the $random_portfolio_objective_results part of the object + if(!inherits(object, "optimize.portfolio.random")) stop("object must be of class optimize.portfolio.random") + + # Check if object$random_portfolio_objective_results is null; if so, the user called optimize.portfolio with trace=FALSE + if(is.null(object$random_portfolio_objective_results)) stop("random_portfolio_objective_results is null; trace=TRUE must be specified in optimize.portfolio") + OptimResults<-object l = length(OptimResults$random_portfolio_objective_results) @@ -223,7 +230,7 @@ #' @param ... any other passthru parameters #' @export extractStats.optimize.portfolio.ROI <- function(object, prefix=NULL, ...) { - + if(!inherits(object, "optimize.portfolio.ROI")) stop("object must be of class optimize.portfolio.ROI") trow<-c(out=object$out, object$weights) result<-trow @@ -246,6 +253,9 @@ extractStats.optimize.portfolio.pso <- function(object, prefix=NULL, ...){ if(!inherits(object, "optimize.portfolio.pso")) stop("object must be of class optimize.portfolio.pso") + # Check if object$PSOoutput is null; if so, the user called optimize.portfolio with trace=FALSE + if(is.null(object$PSOoutput)) stop("PSOoutput is null; trace=TRUE must be specified in optimize.portfolio") + normalize_weights <- function(weights){ # normalize results if necessary if(!is.null(constraints$min_sum) | !is.null(constraints$max_sum)){ @@ -310,7 +320,11 @@ #' @param ... any other passthru parameters #' @export extractStats.optimize.portfolio.GenSA <- function(object, prefix=NULL, ...) { + if(!inherits(object, "optimize.portfolio.GenSA")) stop("object must be of class optimize.portfolio.GenSA") + # Check if object$GenSAoutput is null; if so, the user called optimize.portfolio with trace=FALSE + if(is.null(object$GenSAoutput)) stop("GenSAoutput is null; trace=TRUE must be specified in optimize.portfolio") + trow<-c(out=object$out, object$weights) obj <- unlist(object$objective_measures) result <- c(obj, trow)
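Taken together, the r2758 checks mean extractStats only works on optimizations that kept their trace. A minimal usage sketch; the portfolio specification below uses the then-current interface and is my assumption for illustration, not code from the commit:

library(PortfolioAnalytics)
library(PerformanceAnalytics)  # for the edhec data set
data(edhec)
R <- edhec[, 1:4]
pspec <- portfolio.spec(assets=colnames(R))
pspec <- add.constraint(portfolio=pspec, type="full_investment")
pspec <- add.objective(portfolio=pspec, type="risk", name="ES")
# trace=TRUE keeps the per-candidate results that extractStats needs
opt <- optimize.portfolio(R=R, portfolio=pspec, optimize_method="DEoptim", trace=TRUE)
head(extractStats(opt))  # objective measures, out, and weights per DE candidate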
From noreply at r-forge.r-project.org Sat Aug 10 05:03:22 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 10 Aug 2013 05:03:22 +0200 (CEST) Subject: [Returnanalytics-commits] r2759 - pkg/PortfolioAnalytics/R Message-ID: <20130810030322.ACB31184BCB@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-10 05:03:22 +0200 (Sat, 10 Aug 2013) New Revision: 2759 Modified: pkg/PortfolioAnalytics/R/extractstats.R Log: correcting names for output of extractStats.optimize.portfolio.GenSA Modified: pkg/PortfolioAnalytics/R/extractstats.R =================================================================== --- pkg/PortfolioAnalytics/R/extractstats.R 2013-08-10 02:45:46 UTC (rev 2758) +++ pkg/PortfolioAnalytics/R/extractstats.R 2013-08-10 03:03:22 UTC (rev 2759) @@ -329,7 +329,7 @@ obj <- unlist(object$objective_measures) result <- c(obj, trow) - rnames<-c('out',paste('w',names(object$weights),sep='.')) - names(result)<-rnames + rnames <- name.replace(names(result)) + names(result) <- rnames return(result) } From noreply at r-forge.r-project.org Sat Aug 10 13:32:01 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 10 Aug 2013 13:32:01 +0200 (CEST) Subject: [Returnanalytics-commits] r2760 - in pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7: Code/Tests Vignette Message-ID: <20130810113201.2448118102B@r-forge.r-project.org> Author: shubhanm Date: 2013-08-10 13:32:00 +0200 (Sat, 10 Aug 2013) New Revision: 2760 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Tests/Tests.R pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Vignette/Test_Report.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Vignette/Test_Report.pdf Log: Week 6-7 : tests comparing the HAC functions against their MATLAB counterparts; status: nearing completion Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Tests/Tests.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Tests/Tests.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Tests/Tests.R 2013-08-10 11:32:00 UTC (rev 2760) @@ -0,0 +1,22 @@ +library(sandwich) # needed for sandwich(); without it the script does not run standalone +fpe <- read.table("http://data.princeton.edu/wws509/datasets/effort.dat") +attach(fpe) +lmfit = lm( change ~ setting + effort ) +sandwich(lmfit) +Fr <- c(68,42,42,30, 37,52,24,43, + 66,50,33,23, 47,55,23,47, + 63,53,29,27, 57,49,19,29) + +Temp <- gl(2, 2, 24, labels = c("Low", "High")) +Soft <- gl(3, 8, 24, labels = c("Hard","Medium","Soft")) +M.user <- gl(2, 4, 24, labels = c("N", "Y")) +Brand <- gl(2, 1, 24, labels = c("X", "M")) + +detg <- data.frame(Fr,Temp, Soft,M.user, Brand) +detg.m0 <- glm(Fr ~ M.user*Temp*Soft + Brand, family = poisson, data = detg) +summary(detg.m0) + +detg.mod <- glm(terms(Fr ~ M.user*Temp*Soft + Brand*M.user*Temp, + keep.order = TRUE), + family = poisson, data = detg) +sandwich(detg.mod) \ No newline at end of file Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Vignette/Test_Report.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Vignette/Test_Report.Rnw (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Vignette/Test_Report.Rnw 2013-08-10 11:32:00 UTC (rev 2760) @@ -0,0 +1,35 @@ +\documentclass{article} + +\begin{document} + +\SweaveOpts{concordance=TRUE} +<<>>= +library(sandwich) +fpe <- read.table("http://data.princeton.edu/wws509/datasets/effort.dat") +attach(fpe) +lmfit = lm( change ~ setting + effort ) +lmfit +sandwich(lmfit) +Fr <- c(68,42,42,30, 37,52,24,43, + 66,50,33,23, 47,55,23,47, + 63,53,29,27, 57,49,19,29) + +Temp <- gl(2, 2, 24, labels = c("Low", "High")) +Soft <- gl(3, 8, 24, labels = c("Hard","Medium","Soft")) +M.user <- gl(2, 4, 24, labels = c("N", "Y")) +Brand <- gl(2, 1, 24, labels = c("X", "M")) + +detg <- data.frame(Fr,Temp, Soft,M.user, Brand) +detg.m0 <- glm(Fr ~ M.user*Temp*Soft + Brand, family = poisson, data = detg) +detg.m0 +sandwich(detg.m0) +detg.mod <- glm(terms(Fr ~ M.user*Temp*Soft + Brand*M.user*Temp, + keep.order = TRUE), + family = poisson, data = detg) +sandwich(detg.mod) +@ + + + + +\end{document} \ No newline at end of file Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Vignette/Test_Report.pdf =================================================================== (Binary files differ) Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Vignette/Test_Report.pdf ___________________________________________________________________ Added: svn:mime-type + application/octet-stream
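The r2760 scripts print raw sandwich covariance matrices; for the MATLAB comparison, HAC-corrected coefficient tests may be the more comparable output. A hedged sketch; the lmtest package is my assumption and is not used in the committed code:

library(sandwich)
library(lmtest)
fpe <- read.table("http://data.princeton.edu/wws509/datasets/effort.dat")
lmfit <- lm(change ~ setting + effort, data = fpe)
coeftest(lmfit, vcov. = vcovHAC(lmfit))  # coefficient tests with HAC standard errors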
From noreply at r-forge.r-project.org Sat Aug 10 14:07:35 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 10 Aug 2013 14:07:35 +0200 (CEST) Subject: [Returnanalytics-commits] r2761 - pkg/PerformanceAnalytics/sandbox/pulkit/week6 Message-ID: <20130810120735.BFB8A185A02@r-forge.r-project.org> Author: pulkit Date: 2013-08-10 14:07:34 +0200 (Sat, 10 Aug 2013) New Revision: 2761 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBetaMulti.R Log: Changes in multidrawdown beta Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R 2013-08-10 11:32:00 UTC (rev 2760) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R 2013-08-10 12:07:34 UTC (rev 2761) @@ -35,7 +35,7 @@ #' September 2012 -CdarMultiPath<-function (R,ps,sample,instr, geometric = TRUE,p = 0.95, ...) +CdarMultiPath<-function (R,ps,sample, geometric = TRUE,p = 0.95, ...) { #p = .setalphaprob(p) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBetaMulti.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBetaMulti.R 2013-08-10 11:32:00 UTC (rev 2760) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBetaMulti.R 2013-08-10 12:07:34 UTC (rev 2761) @@ -35,10 +35,10 @@ #'of Florida, September 2012.
#' #'@examples -#' +#'MultiBetaDrawdown(cbind(edhec,edhec),cbind(edhec[,2],edhec[,2]),sample = 2,ps=c(0.4,0.6)) #'BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431 -MultiBetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ +MultiBetaDrawdown<-function(R,Rm,sample,ps,h=0,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ # DESCRIPTION: # @@ -58,6 +58,7 @@ columns = ncol(R) drawdowns_m = Drawdowns(Rm) type = type[1] + nr = nrow(Rm) if(type=="average"){ p = 0 } @@ -109,8 +110,7 @@ f.rhs = c(f.rhs,rep(1/(1-p)*nr,sample*nr)) val = lp("max",f.obj,f.con,f.dir,f.rhs) - q = matrix(val$solutions,ncol = sample) - + q = matrix(val$solution,ncol = sample) # TODO INCORPORATE WEIGHTS if(geometric){ @@ -123,37 +123,42 @@ multiDDbeta<-function(x){ boolean = (cummax(cumul_xm)==cumul_xm) index = NULL - for(j in 1:nrow(Rm)){ - if(boolean[j] == TRUE){ - index = c(index,j) - b = j - } - else{ - index = c(index,b) - } + for(i in 1:sample){ + for(j in 1:nrow(Rm)){ + if(boolean[j,i] == TRUE){ + index = c(index,j) + b = j + } + else{ + index = c(index,b) + } + } } + index = matrix(index,ncol = sample) + beta_dd = 0 for(i in 1:sample){ - for(j in 1:nrow(x)){ - - beta_dd = (p[i]*q[j,i]*(x[index,i]-x[,i]))/CDaR(Rm,p=p) + beta_dd = beta_dd + sum(ps[i]*q[,i]*(as.numeric(x[index[,i],i])-x[,i])) + } + beta_dd = beta_dd/CdarMultiPath(Rm,ps=ps,p=p,sample = sample) return(beta_dd) } - result = matrix(nrow = 1, ncol = ncol(R)/sample) + result = NULL for (i in 1:(ncol(R)/sample)) { ret<-NULL for(j in 1:sample){ ret<-cbind(ret,R[,(j-1)*ncol(R)/sample+i]) } - result[i] <- multiDDbeta(ret) + result <-c(result, multiDDbeta(ret)) } - - dim(result) = c(1, NCOL(R)/sample) - colnames(result) = colnames(R)[1:ncol(R)/sample] - rownames(result) = paste("Conditional Drawdown ", - + result = matrix(result,nrow = 1) + colnames(result) = colnames(R)[1:(ncol(R)/sample)] + #colnames(result) = colnames(R)[1:ncol(R)/sample] + rownames(result) = paste("Drawdown Beta","(",p*100,"%)",sep="") + return(result) + } -} + From noreply at r-forge.r-project.org Sat Aug 10 16:15:12 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 10 Aug 2013 16:15:12 +0200 (CEST) Subject: [Returnanalytics-commits] r2762 - in pkg/PortfolioAnalytics: R man Message-ID: <20130810141512.988BA18544E@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-10 16:15:12 +0200 (Sat, 10 Aug 2013) New Revision: 2762 Modified: pkg/PortfolioAnalytics/R/applyFUN.R pkg/PortfolioAnalytics/R/charts.RP.R pkg/PortfolioAnalytics/man/chart.Scatter.RP.Rd Log: Modified applyFUN to accept a single set of weights in addition to a matrix of weights. Modified chart.Scatter.RP to calculate a risk or return metric that is not included in the objective_measures or extractStats output.
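Before the diffs, a sketch of what the r2762 change enables; edhec, the equal weights, and the choice of "StdDev" are assumptions for illustration (applyFUN appears to be internal, hence the ::: call), not code from the commit:

library(PortfolioAnalytics)
library(PerformanceAnalytics)  # for the edhec data set
data(edhec)
R <- edhec[, 1:4]
# a single weight vector now works in applyFUN (previously a matrix of portfolios was assumed)
PortfolioAnalytics:::applyFUN(R=R, weights=rep(0.25, 4), FUN="StdDev")
# and chart.Scatter.RP can recompute a metric that was never an objective, e.g.
# chart.Scatter.RP(opt_rp, R=R, return.col="mean", risk.col="StdDev")
# where opt_rp is an optimize.portfolio(..., optimize_method="random", trace=TRUE) result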
Modified: pkg/PortfolioAnalytics/R/applyFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/applyFUN.R 2013-08-10 12:07:34 UTC (rev 2761) +++ pkg/PortfolioAnalytics/R/applyFUN.R 2013-08-10 14:15:12 UTC (rev 2762) @@ -53,17 +53,31 @@ } ) # end switch block - out <- rep(0, nrow(weights)) - .formals <- formals(fun) - onames <- names(.formals) - for(i in 1:nrow(weights)){ - nargs$weights <- as.numeric(weights[i,]) - nargs$x <- R %*% as.numeric(weights[i,]) + if(!is.null(nrow(weights))){ + # case for matrix of weights + out <- rep(0, nrow(weights)) + .formals <- formals(fun) + onames <- names(.formals) + for(i in 1:nrow(weights)){ + nargs$weights <- as.numeric(weights[i,]) + nargs$x <- R %*% as.numeric(weights[i,]) + dargs <- nargs + pm <- pmatch(names(dargs), onames, nomatch = 0L) + names(dargs[pm > 0L]) <- onames[pm] + .formals[pm] <- dargs[pm > 0L] + out[i] <- try(do.call(fun, .formals)) + } + } else { + # case for single vector of weights + .formals <- formals(fun) + onames <- names(.formals) + nargs$weights <- as.numeric(weights) + nargs$x <- R %*% as.numeric(weights) dargs <- nargs pm <- pmatch(names(dargs), onames, nomatch = 0L) names(dargs[pm > 0L]) <- onames[pm] .formals[pm] <- dargs[pm > 0L] - out[i] <- try(do.call(fun, .formals)) + out <- try(do.call(fun, .formals)) } - return(out) + return(out) } \ No newline at end of file Modified: pkg/PortfolioAnalytics/R/charts.RP.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-10 12:07:34 UTC (rev 2761) +++ pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-10 14:15:12 UTC (rev 2762) @@ -95,6 +95,9 @@ #' classic risk return scatter of random portfolios #' #' @param RP set of portfolios created by \code{\link{optimize.portfolio}} +#' @param R an optional xts, vector, matrix, data frame, timeSeries or zoo +#' object of asset returns, used to recalculate the objective function when +#' return.col or risk.col is not part of the extractStats output. #' @param neighbors set of 'neighbor' portfolios to overplot, see Details #' @param return.col string matching the objective of a 'return' objective, on vertical axis #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis @@ -103,7 +106,7 @@ #' @param element.color color for the default plot scatter points #' @seealso \code{\link{optimize.portfolio}} #' @export -chart.Scatter.RP <- function(RP, neighbors = NULL, return.col='mean', risk.col='ES', ..., element.color = "darkgray", cex.axis=0.8){ +chart.Scatter.RP <- function(RP, R=NULL, neighbors = NULL, return.col='mean', risk.col='ES', ..., element.color = "darkgray", cex.axis=0.8){ # more or less specific to the output of the random portfolio code with constraints # will work to a point with other functions, such as optimize.portfolio.parallel # there's still a lot to do to improve this. @@ -122,10 +125,45 @@ risk.column = pmatch(risk.col,columnnames) } - if(is.na(return.column) | is.na(risk.column)) stop(return.col,' or ',risk.col, ' do not match extractStats output') + # if(is.na(return.column) | is.na(risk.column)) stop(return.col,' or ',risk.col, ' do not match extractStats output') + # If the user has passed in return.col or risk.col that does not match extractStats output, + # this gives the flexibility of passing in return or risk metrics that are not + # objective measures in the optimization.
This may cause issues with the "neighbors" + # functionality since that is based on the "out" column + if(is.na(return.column) | is.na(risk.column)){ + return.col <- gsub("\\..*", "", return.col) + risk.col <- gsub("\\..*", "", risk.col) + warning(return.col,' or ', risk.col, ' do not match extractStats output of $objective_measures slot') + # Get the matrix of weights for applyFUN + wts_index <- grep("w.", columnnames) + wts <- xtract[, wts_index] + if(is.na(return.column)){ + tmpret <- applyFUN(R=R, weights=wts, FUN=return.col) + xtract <- cbind(tmpret, xtract) + colnames(xtract)[which(colnames(xtract) == "tmpret")] <- return.col + } + if(is.na(risk.column)){ + tmprisk <- applyFUN(R=R, weights=wts, FUN=risk.col) + xtract <- cbind(tmprisk, xtract) + colnames(xtract)[which(colnames(xtract) == "tmprisk")] <- risk.col + } + columnnames = colnames(xtract) + return.column = pmatch(return.col,columnnames) + if(is.na(return.column)) { + return.col = paste(return.col,return.col,sep='.') + return.column = pmatch(return.col,columnnames) + } + risk.column = pmatch(risk.col,columnnames) + if(is.na(risk.column)) { + risk.col = paste(risk.col,risk.col,sep='.') + risk.column = pmatch(risk.col,columnnames) + } + } + # print(colnames(head(xtract))) + plot(xtract[,risk.column],xtract[,return.column], xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) - + if(!is.null(neighbors)){ if(is.vector(neighbors)){ if(length(neighbors)==1){ @@ -178,8 +216,19 @@ risk.col = paste(risk.col,risk.col,sep='.') risk.column = pmatch(risk.col,names(objcols)) } - if(is.na(return.column) | is.na(risk.column)) warning(return.col,' or ',risk.col, ' do not match extractStats output of $objective_measures slot') - points(objcols[risk.column], objcols[return.column], col="blue", pch=16) # optimal + # risk and return metrics for the optimal weights if the RP object does not + # contain the metrics specified by return.col or risk.col + if(is.na(return.column) | is.na(risk.column)){ + return.col <- gsub("\\..*", "", return.col) + risk.col <- gsub("\\..*", "", risk.col) + # warning(return.col,' or ', risk.col, ' do not match extractStats output of $objective_measures slot') + opt_weights <- RP$weights + ret <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=return.col)) + risk <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=risk.col)) + points(risk, ret, col="blue", pch=16) # optimal + } else { + points(objcols[risk.column], objcols[return.column], col="blue", pch=16) # optimal + } axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) Modified: pkg/PortfolioAnalytics/man/chart.Scatter.RP.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Scatter.RP.Rd 2013-08-10 12:07:34 UTC (rev 2761) +++ pkg/PortfolioAnalytics/man/chart.Scatter.RP.Rd 2013-08-10 14:15:12 UTC (rev 2762) @@ -2,7 +2,7 @@ \alias{chart.Scatter.RP} \title{classic risk return scatter of random portfolios} \usage{ - chart.Scatter.RP(RP, neighbors = NULL, + chart.Scatter.RP(RP, R = NULL, neighbors = NULL, return.col = "mean", risk.col = "ES", ..., element.color = "darkgray", cex.axis = 0.8) } \arguments{ \item{RP}{set of portfolios created by \code{\link{optimize.portfolio}}} + \item{R}{an optional xts, vector, matrix, data frame, + timeSeries or zoo object of asset returns, used to + recalculate the objective function when return.col or + risk.col is not part of the extractStats output.} + \item{neighbors}{set of
'neighbor' portfolios to overplot, see Details} From noreply at r-forge.r-project.org Sun Aug 11 04:34:25 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 11 Aug 2013 04:34:25 +0200 (CEST) Subject: [Returnanalytics-commits] r2763 - in pkg/PortfolioAnalytics: R man Message-ID: <20130811023426.44409185819@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-11 04:34:20 +0200 (Sun, 11 Aug 2013) New Revision: 2763 Modified: pkg/PortfolioAnalytics/R/charts.RP.R pkg/PortfolioAnalytics/man/charts.RP.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd Log: modifying chart methods for optimize.portfolio.random objects for chart.Scatter.RP to plot other risk or return metrics not specified in objective measures Modified: pkg/PortfolioAnalytics/R/charts.RP.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-10 14:15:12 UTC (rev 2762) +++ pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-11 02:34:20 UTC (rev 2763) @@ -246,6 +246,7 @@ #' \code{risk.col},\code{return.col}, and weights columns all properly named. #' #' @param RP set of random portfolios created by \code{\link{optimize.portfolio}} +#' @param R an optional xts, vector, matrix, data frame, timeSeries or zoo #' @param ... any other passthru parameters #' @param risk.col string name of column to use for risk (horizontal axis) #' @param return.col string name of column to use for returns (vertical axis) @@ -255,14 +256,14 @@ #' \code{\link{optimize.portfolio}} #' \code{\link{extractStats}} #' @export -charts.RP <- function(RP, risk.col, return.col, +charts.RP <- function(RP, R=NULL, risk.col, return.col, neighbors=NULL, main="Random.Portfolios", ...){ # Specific to the output of the random portfolio code with constraints # @TODO: check that RP is of the correct class op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,1.5),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.RP(RP, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) + chart.Scatter.RP(RP, R=R, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) par(mar=c(2,4,0,2)) chart.Weights.RP(RP, main="", neighbors=neighbors, ...) par(op) @@ -284,13 +285,14 @@ #' \code{risk.col},\code{return.col}, and weights columns all properly named. #' @param x set of portfolios created by \code{\link{optimize.portfolio}} #' @param ... any other passthru parameters +#' @param R an optional an xts, vector, matrix, data frame, timeSeries or zoo #' @param risk.col string name of column to use for risk (horizontal axis) #' @param return.col string name of column to use for returns (vertical axis) #' @param neighbors set of 'neighbor portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} #' @export -plot.optimize.portfolio.random <- function(x, ..., return.col='mean', risk.col='ES', neighbors=NULL, main='optimized portfolio plot') { - charts.RP(RP=x, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) +plot.optimize.portfolio.random <- function(x, ..., R=NULL, return.col='mean', risk.col='ES', neighbors=NULL, main='optimized portfolio plot') { + charts.RP(RP=x, R=R, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) 
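+ # R is only passed through here (to charts.RP and on to chart.Scatter.RP), so
+ # a call like plot(opt_rp, R=edhec[,1:10], risk.col="StdDev") can chart a risk
+ # metric that was never an optimization objective (a sketch; "StdDev" and the
+ # edhec columns are illustrative, not part of the committed code).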
} #' plot method for optimize.portfolio output Modified: pkg/PortfolioAnalytics/man/charts.RP.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.RP.Rd 2013-08-10 14:15:12 UTC (rev 2762) +++ pkg/PortfolioAnalytics/man/charts.RP.Rd 2013-08-11 02:34:20 UTC (rev 2763) @@ -2,13 +2,16 @@ \alias{charts.RP} \title{scatter and weights chart for random portfolios} \usage{ - charts.RP(RP, risk.col, return.col, neighbors = NULL, - main = "Random.Portfolios", ...) + charts.RP(RP, R = NULL, risk.col, return.col, + neighbors = NULL, main = "Random.Portfolios", ...) } \arguments{ \item{RP}{set of random portfolios created by \code{\link{optimize.portfolio}}} + \item{R}{an optional xts, vector, matrix, data frame, + timeSeries or zoo} + \item{...}{any other passthru parameters} \item{risk.col}{string name of column to use for risk Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd 2013-08-10 14:15:12 UTC (rev 2762) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd 2013-08-11 02:34:20 UTC (rev 2763) @@ -2,7 +2,7 @@ \alias{plot.optimize.portfolio.random} \title{plot method for optimize.portfolio.random output} \usage{ - plot.optimize.portfolio.random(x, ..., + plot.optimize.portfolio.random(x, ..., R = NULL, return.col = "mean", risk.col = "ES", neighbors = NULL, main = "optimized portfolio plot") } @@ -12,6 +12,9 @@ \item{...}{any other passthru parameters} + \item{R}{an optional an xts, vector, matrix, data frame, + timeSeries or zoo} + \item{risk.col}{string name of column to use for risk (horizontal axis)} From noreply at r-forge.r-project.org Sun Aug 11 05:05:40 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 11 Aug 2013 05:05:40 +0200 (CEST) Subject: [Returnanalytics-commits] r2764 - in pkg/PortfolioAnalytics: R man Message-ID: <20130811030541.114AD185706@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-11 05:05:33 +0200 (Sun, 11 Aug 2013) New Revision: 2764 Modified: pkg/PortfolioAnalytics/R/charts.DE.R pkg/PortfolioAnalytics/man/charts.DE.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd Log: modifying charting methods for optimize.portfolio.DEoptim objects to plot other return or risk metrics not included in objective measures. Modified: pkg/PortfolioAnalytics/R/charts.DE.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-11 02:34:20 UTC (rev 2763) +++ pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-11 03:05:33 UTC (rev 2764) @@ -121,8 +121,43 @@ risk.column = pmatch(risk.col,columnnames) } - if(is.na(return.column) | is.na(risk.column)) stop(return.col,' or ',risk.col, ' do not match extractStats output') + # if(is.na(return.column) | is.na(risk.column)) stop(return.col,' or ',risk.col, ' do not match extractStats output') + # If the user has passed in return.col or risk.col that does not match extractStats output + # This will give the flexibility of passing in return or risk metrics that are not + # objective measures in the optimization. 
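+  # (Same recovery path as in chart.Scatter.RP above: missing columns are
+  # rebuilt from the weights via applyFUN(R=R, weights=wts, FUN=...), so R
+  # must be supplied whenever such a column is requested.)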
This may cause issues with the "neighbors" + # functionality since that is based on the "out" column + if(is.na(return.column) | is.na(risk.column)){ + return.col <- gsub("\\..*", "", return.col) + risk.col <- gsub("\\..*", "", risk.col) + warning(return.col,' or ', risk.col, ' do not match extractStats output of $objective_measures slot') + # Get the matrix of weights for applyFUN + wts_index <- grep("w.", columnnames) + wts <- xtract[, wts_index] + if(is.na(return.column)){ + tmpret <- applyFUN(R=R, weights=wts, FUN=return.col) + xtract <- cbind(tmpret, xtract) + colnames(xtract)[which(colnames(xtract) == "tmpret")] <- return.col + } + if(is.na(risk.column)){ + tmprisk <- applyFUN(R=R, weights=wts, FUN=risk.col) + xtract <- cbind(tmprisk, xtract) + colnames(xtract)[which(colnames(xtract) == "tmprisk")] <- risk.col + } + columnnames = colnames(xtract) + return.column = pmatch(return.col,columnnames) + if(is.na(return.column)) { + return.col = paste(return.col,return.col,sep='.') + return.column = pmatch(return.col,columnnames) + } + risk.column = pmatch(risk.col,columnnames) + if(is.na(risk.column)) { + risk.col = paste(risk.col,risk.col,sep='.') + risk.column = pmatch(risk.col,columnnames) + } + } + # print(colnames(head(xtract))) + plot(xtract[,risk.column],xtract[,return.column], xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) if(!is.null(neighbors)){ @@ -220,8 +255,19 @@ risk.col = paste(risk.col,risk.col,sep='.') risk.column = pmatch(risk.col,names(objcols)) } - if(is.na(return.column) | is.na(risk.column)) warning(return.col,' or ',risk.col, ' do not match extractStats output of $objective_measures slot') - points(objcols[risk.column], objcols[return.column], col="blue", pch=16) # optimal + # risk and return metrics for the optimal weights if the RP object does not + # contain the metrics specified by return.col or risk.col + if(is.na(return.column) | is.na(risk.column)){ + return.col <- gsub("\\..*", "", return.col) + risk.col <- gsub("\\..*", "", risk.col) + # warning(return.col,' or ', risk.col, ' do not match extractStats output of $objective_measures slot') + opt_weights <- DE$weights + ret <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=return.col)) + risk <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=risk.col)) + points(risk, ret, col="blue", pch=16) #optimal + } else { + points(objcols[risk.column], objcols[return.column], col="blue", pch=16) # optimal + } axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) @@ -239,6 +285,7 @@ #' \code{risk.col},\code{return.col}, and weights columns all properly named. #' #' @param DE set of random portfolios created by \code{\link{optimize.portfolio}} +#' @param R an optional an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the objective function where required #' @param ... 
any other passthru parameters #' @param risk.col string name of column to use for risk (horizontal axis) #' @param return.col string name of column to use for returns (vertical axis) @@ -248,13 +295,13 @@ #' \code{\link{optimize.portfolio}} #' \code{\link{extractStats}} #' @export -charts.DE <- function(DE, risk.col, return.col, neighbors=NULL, main="DEoptim.Portfolios", ...){ +charts.DE <- function(DE, R=NULL, risk.col, return.col, neighbors=NULL, main="DEoptim.Portfolios", ...){ # Specific to the output of the random portfolio code with constraints # @TODO: check that DE is of the correct class op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,1.5),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.DE(DE, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) + chart.Scatter.DE(DE, R=R, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) par(mar=c(2,4,0,2)) chart.Weights.DE(DE, main="", neighbors=neighbors, ...) par(op) @@ -276,11 +323,12 @@ #' \code{risk.col},\code{return.col}, and weights columns all properly named. #' @param x set of portfolios created by \code{\link{optimize.portfolio}} #' @param ... any other passthru parameters +#' @param R an optional an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the objective function where required #' @param risk.col string name of column to use for risk (horizontal axis) #' @param return.col string name of column to use for returns (vertical axis) #' @param neighbors set of 'neighbor portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} #' @export -plot.optimize.portfolio.DEoptim <- function(x, ..., return.col='mean', risk.col='ES', neighbors=NULL, main='optimized portfolio plot') { - charts.DE(DE=x, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) +plot.optimize.portfolio.DEoptim <- function(x, ..., R=NULL, return.col='mean', risk.col='ES', neighbors=NULL, main='optimized portfolio plot') { + charts.DE(DE=x, R=R, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) } \ No newline at end of file Modified: pkg/PortfolioAnalytics/man/charts.DE.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.DE.Rd 2013-08-11 02:34:20 UTC (rev 2763) +++ pkg/PortfolioAnalytics/man/charts.DE.Rd 2013-08-11 03:05:33 UTC (rev 2764) @@ -2,13 +2,17 @@ \alias{charts.DE} \title{scatter and weights chart for random portfolios} \usage{ - charts.DE(DE, risk.col, return.col, neighbors = NULL, - main = "DEoptim.Portfolios", ...) + charts.DE(DE, R = NULL, risk.col, return.col, + neighbors = NULL, main = "DEoptim.Portfolios", ...) 
} \arguments{ \item{DE}{set of random portfolios created by \code{\link{optimize.portfolio}}} + \item{R}{an optional an xts, vector, matrix, data frame, + timeSeries or zoo object of asset returns, used to + recalulate the objective function where required} + \item{...}{any other passthru parameters} \item{risk.col}{string name of column to use for risk Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd 2013-08-11 02:34:20 UTC (rev 2763) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd 2013-08-11 03:05:33 UTC (rev 2764) @@ -2,7 +2,7 @@ \alias{plot.optimize.portfolio.DEoptim} \title{plot method for optimize.portfolio.DEoptim output} \usage{ - plot.optimize.portfolio.DEoptim(x, ..., + plot.optimize.portfolio.DEoptim(x, ..., R = NULL, return.col = "mean", risk.col = "ES", neighbors = NULL, main = "optimized portfolio plot") } @@ -12,6 +12,10 @@ \item{...}{any other passthru parameters} + \item{R}{an optional an xts, vector, matrix, data frame, + timeSeries or zoo object of asset returns, used to + recalulate the objective function where required} + \item{risk.col}{string name of column to use for risk (horizontal axis)} From noreply at r-forge.r-project.org Sun Aug 11 05:07:42 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 11 Aug 2013 05:07:42 +0200 (CEST) Subject: [Returnanalytics-commits] r2765 - in pkg/PortfolioAnalytics: R man Message-ID: <20130811030743.2C22C185706@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-11 05:07:41 +0200 (Sun, 11 Aug 2013) New Revision: 2765 Modified: pkg/PortfolioAnalytics/R/charts.RP.R pkg/PortfolioAnalytics/man/charts.RP.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd Log: updating documentation for charts.RP Modified: pkg/PortfolioAnalytics/R/charts.RP.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-11 03:05:33 UTC (rev 2764) +++ pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-11 03:07:41 UTC (rev 2765) @@ -246,7 +246,9 @@ #' \code{risk.col},\code{return.col}, and weights columns all properly named. #' #' @param RP set of random portfolios created by \code{\link{optimize.portfolio}} -#' @param R an optional xts, vector, matrix, data frame, timeSeries or zoo +#' @param R an optional an xts, vector, matrix, data frame, timeSeries or zoo +#' object of asset returns, used to recalulate the objective function when +#' return.col or risk.col is not part of the extractStats output. #' @param ... any other passthru parameters #' @param risk.col string name of column to use for risk (horizontal axis) #' @param return.col string name of column to use for returns (vertical axis) @@ -286,6 +288,8 @@ #' @param x set of portfolios created by \code{\link{optimize.portfolio}} #' @param ... any other passthru parameters #' @param R an optional an xts, vector, matrix, data frame, timeSeries or zoo +#' object of asset returns, used to recalulate the objective function when +#' return.col or risk.col is not part of the extractStats output. 
#' @param risk.col string name of column to use for risk (horizontal axis) #' @param return.col string name of column to use for returns (vertical axis) #' @param neighbors set of 'neighbor portfolios to overplot Modified: pkg/PortfolioAnalytics/man/charts.RP.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.RP.Rd 2013-08-11 03:05:33 UTC (rev 2764) +++ pkg/PortfolioAnalytics/man/charts.RP.Rd 2013-08-11 03:07:41 UTC (rev 2765) @@ -9,8 +9,10 @@ \item{RP}{set of random portfolios created by \code{\link{optimize.portfolio}}} - \item{R}{an optional xts, vector, matrix, data frame, - timeSeries or zoo} + \item{R}{an optional an xts, vector, matrix, data frame, + timeSeries or zoo object of asset returns, used to + recalulate the objective function when return.col or + risk.col is not part of the extractStats output.} \item{...}{any other passthru parameters} Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd 2013-08-11 03:05:33 UTC (rev 2764) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd 2013-08-11 03:07:41 UTC (rev 2765) @@ -13,7 +13,9 @@ \item{...}{any other passthru parameters} \item{R}{an optional an xts, vector, matrix, data frame, - timeSeries or zoo} + timeSeries or zoo object of asset returns, used to + recalulate the objective function when return.col or + risk.col is not part of the extractStats output.} \item{risk.col}{string name of column to use for risk (horizontal axis)} From noreply at r-forge.r-project.org Sun Aug 11 14:23:24 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 11 Aug 2013 14:23:24 +0200 (CEST) Subject: [Returnanalytics-commits] r2766 - pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code Message-ID: <20130811122324.77208185816@r-forge.r-project.org> Author: shubhanm Date: 2013-08-11 14:23:23 +0200 (Sun, 11 Aug 2013) New Revision: 2766 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code/effort.dat pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code/nwse.m Log: Week 6-7 : HAC Newey-west code and data for testing Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code/effort.dat =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code/effort.dat (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code/effort.dat 2013-08-11 12:23:23 UTC (rev 2766) @@ -0,0 +1,23 @@ + setting effort change + Bolivia 46 0 1 + Brazil 74 0 10 + Chile 89 16 29 + Colombia 77 16 25 + CostaRica 84 21 29 + Cuba 89 15 40 + DominicanRep 68 14 21 + Ecuador 70 6 0 + ElSalvador 60 13 13 + Guatemala 55 9 4 + Haiti 35 3 0 + Honduras 51 7 7 + Jamaica 87 23 21 + Mexico 83 4 9 + Nicaragua 68 0 7 + Panama 84 19 22 + Paraguay 74 3 6 + Peru 73 0 2 + TrinidadTobago 84 15 29 + Venezuela 91 7 11 + + Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code/nwse.m =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code/nwse.m (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week6-7/Code/Equivalent Matlab Code/nwse.m 2013-08-11 12:23:23 UTC (rev 2766) @@ -0,0 +1,42 @@ 
+function [V,S]=nwse(e,X,nlags) +% PURPOSE: computes Newey-West adjusted heteroscedastic-serial +% consistent standard errors (only se's) +%--------------------------------------------------- +% USAGE: [V,S] = nwse(e,X,nlag) +% where: e = T x n vector of model residuls +% X = T x k matrix of independ vars +% nlags = lag length to use +%--------------------------------------------------- +% RETURNS: +% V is the Newey-West Var-Cov matrix +% S is the spectral density of u = e.*X +% -------------------------------------------------- + +% written by: Mike Cliff, Purdue Finance, mcliff at mgmt.purdue.edu +% CREATED 11/17/00 +% MODIFIED 1/23/01 Input e, X separtely; return V, S; df adjustment +% 2/20/01 Allow for system of eqs (multiple e vectors) + +if (nargin ~= 3); error('Wrong # of arguments to nwse'); end; + +[T,k] = size(X); +n = cols(e); +S = zeros(n*k,n*k); +if k == 1 & X == ones(T,1) + u = e; +else + u = []; + for i = 1:cols(e) + u = [u repmat(e(:,i),1,k).*X]; + end +end + +for lag = 0:nlags + rho = u(1:T-lag,:)'*u(1+lag:T,:)/(T-k); + if lag >= 1, rho = rho + rho'; end + wt = 1 - lag/(nlags+1); + S = S + wt*rho; +end + +V = kron(eye(n),(X'*X/T)\eye(k)); +V = V*S*V/T; \ No newline at end of file From noreply at r-forge.r-project.org Mon Aug 12 05:14:39 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 12 Aug 2013 05:14:39 +0200 (CEST) Subject: [Returnanalytics-commits] r2767 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130812031439.467DB183FD8@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-12 05:14:38 +0200 (Mon, 12 Aug 2013) New Revision: 2767 Added: pkg/PortfolioAnalytics/man/return_constraint.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/constrained_objective.R pkg/PortfolioAnalytics/R/constraints.R pkg/PortfolioAnalytics/R/optimize.portfolio.R Log: adding functionality to specify target return as a constraint per conversations with Doug. Target return can now be specified both as an objective, just as it has always been, and as a constraint. Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-11 12:23:23 UTC (rev 2766) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-12 03:14:38 UTC (rev 2767) @@ -82,6 +82,7 @@ export(randomize_portfolio_v1) export(randomize_portfolio_v2) export(randomize_portfolio) +export(return_constraint) export(return_objective) export(risk_budget_objective) export(rp_transform) Modified: pkg/PortfolioAnalytics/R/constrained_objective.R =================================================================== --- pkg/PortfolioAnalytics/R/constrained_objective.R 2013-08-11 12:23:23 UTC (rev 2766) +++ pkg/PortfolioAnalytics/R/constrained_objective.R 2013-08-12 03:14:38 UTC (rev 2767) @@ -518,6 +518,14 @@ out = out + penalty * mult * abs(to - turnover_target) } } # End turnover constraint penalty + + # penalize weights that violate return target constraint + if(!is.null(constraints$return_target)){ + return_target <- constraints$return_target + mean_return <- mean(R %*% w) + mult <- 1 + out = out + penalty * mult * abs(mean_return - return_target) + } # End return constraint penalty nargs <- list(...) 
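  # Note on the return-target penalty added above: it charges
  #   penalty * mult * abs(mean(R %*% w) - return_target)
  # so weights that miss the target are penalized rather than rejected outright.
  # With a penalty of, say, 1e4, missing the target by 10bp (0.001) adds about
  # 1e4 * 0.001 = 10 to the objective (illustrative numbers only). The matching
  # constraint is declared as, e.g.,
  #   add.constraint(portfolio, type="return", return_target=0.007)
  # where return_target=0.007 is an arbitrary illustrative value.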
if(length(nargs)==0) nargs <- NULL Modified: pkg/PortfolioAnalytics/R/constraints.R =================================================================== --- pkg/PortfolioAnalytics/R/constraints.R 2013-08-11 12:23:23 UTC (rev 2766) +++ pkg/PortfolioAnalytics/R/constraints.R 2013-08-12 03:14:38 UTC (rev 2767) @@ -272,6 +272,12 @@ message=message, ...=...) }, + # Return constraint + return = {tmp_constraint <- return_constraint(type=type, + enabled=enabled, + message=message, + ...=...) + }, # Do nothing and return the portfolio object if type is NULL null = {return(portfolio)} ) @@ -604,6 +610,9 @@ out$max_pos_long <- constraint$max_pos_long out$max_pos_short <- constraint$max_pos_short } + if(inherits(constraint, "return_constraint")){ + out$return_target <- constraint$return_target + } } } @@ -681,6 +690,30 @@ return(Constraint) } +#' constructor for return_constraint +#' +#' This function is called by add.constraint when type="return" is specified, \code{\link{add.constraint}} +#' +#' @param type character type of the constraint +#' @param return_target return target value +#' @param enabled TRUE/FALSE +#' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE. +#' @param \dots any other passthru parameters +#' @author Ross Bennett +#' @examples +#' data(edhec) +#' ret <- edhec[, 1:4] +#' +#' pspec <- portfolio.spec(assets=colnames(ret)) +#' +#' pspec <- add.constraint(portfolio=pspec, type="return", div_target=mean(colMeans(ret))) +#' @export +return_constraint <- function(type="return", return_target, enabled=TRUE, message=FALSE, ...){ + Constraint <- constraint_v2(type, enabled=enabled, constrclass="return_constraint", ...) + Constraint$return_target <- return_target + return(Constraint) +} + #' constructor for position_limit_constraint #' #' This function is called by add.constraint when type="position_limit" is specified, \code{\link{add.constraint}} Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-11 12:23:23 UTC (rev 2766) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-12 03:14:38 UTC (rev 2767) @@ -775,7 +775,11 @@ # we can either miniminze variance or maximize quiadratic utility (we will be minimizing the neg. quad. utility) moments <- list(mean=rep(0, N)) alpha <- 0.05 - target <- NA + if(!is.null(constraints$return_target)){ + target <- constraints$return_target + } else { + target <- NA + } lambda <- 1 for(objective in portfolio$objectives){ if(objective$enabled){ Added: pkg/PortfolioAnalytics/man/return_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/return_constraint.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/return_constraint.Rd 2013-08-12 03:14:38 UTC (rev 2767) @@ -0,0 +1,35 @@ +\name{return_constraint} +\alias{return_constraint} +\title{constructor for return_constraint} +\usage{ + return_constraint(type = "return", return_target, + enabled = TRUE, message = FALSE, ...) +} +\arguments{ + \item{type}{character type of the constraint} + + \item{return_target}{return target value} + + \item{enabled}{TRUE/FALSE} + + \item{message}{TRUE/FALSE. The default is message=FALSE. 
+ Display messages if TRUE.} + + \item{\dots}{any other passthru parameters} +} +\description{ + This function is called by add.constraint when + type="return" is specified, \code{\link{add.constraint}} +} +\examples{ +data(edhec) +ret <- edhec[, 1:4] + +pspec <- portfolio.spec(assets=colnames(ret)) + +pspec <- add.constraint(portfolio=pspec, type="return", div_target=mean(colMeans(ret))) +} +\author{ + Ross Bennett +} + From noreply at r-forge.r-project.org Mon Aug 12 23:09:34 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 12 Aug 2013 23:09:34 +0200 (CEST) Subject: [Returnanalytics-commits] r2768 - in pkg/FactorAnalytics: R vignettes Message-ID: <20130812210934.E205B185959@r-forge.r-project.org> Author: chenyian Date: 2013-08-12 23:09:34 +0200 (Mon, 12 Aug 2013) New Revision: 2768 Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw Log: update vignettes Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-12 03:14:38 UTC (rev 2767) +++ pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-12 21:09:34 UTC (rev 2768) @@ -140,7 +140,7 @@ assets = unique(data[[assetvar]]) timedates = as.Date(unique(data[[datevar]])) -# data[[datevar]] <- as.Date(data[[datevar]]) + data[[datevar]] <- as.Date(data[[datevar]]) if (length(timedates) < 2) stop("At least two time points, t and t-1, are needed for fitting the factor model.") @@ -192,9 +192,7 @@ } } - - - + regression.formula <- paste("~", paste(exposure.names, collapse = "+")) # "~ BOOK2MARKET" if (length(exposures.factor)) { @@ -338,8 +336,7 @@ } # if there is industry dummy variables if (length(exposures.factor)) { - numCoefs <- length(exposures.numeric) + length(levels(data[, - exposures.factor])) + numCoefs <- length(exposures.numeric) + length(levels(data[,exposures.factor])) ncols <- 1 + 2 * numCoefs + numAssets fnames <- c(exposures.numeric, paste(exposures.factor, levels(data[, exposures.factor]), sep = "")) @@ -355,8 +352,7 @@ # create matrix for fit FE.hat.mat <- matrix(NA, ncol = ncols, nrow = numTimePoints, - dimnames = list(as.character(as.Date(as.numeric(names(FE.hat)), origin = "1970-01-01")), - cnames)) + dimnames = list(as.character(timedates),cnames)) # give each element t names for (i in 1:length(FE.hat)) { names(FE.hat[[i]])[1] <- "numCoefs" @@ -370,7 +366,7 @@ FE.hat.mat[i, idx] <- FE.hat[[i]] } # give back the names of timedates - timedates <- as.Date(as.numeric(dimnames(FE.hat)[[1]]), origin = "1970-01-01") +# timedates <- as.Date(as.numeric(dimnames(FE.hat)[[1]]), origin = "1970-01-01") coefs.names <- colnames(FE.hat.mat)[2:(1 + numCoefs)] # estimated factors returns ordered by time f.hat <- xts(x = FE.hat.mat[, 2:(1 + numCoefs)], order.by = timedates) @@ -414,16 +410,17 @@ B.final[, match("(Intercept)", colnames, 0)] <- 1 numeric.columns <- match(exposures.numeric, colnames, 0) # only take the latest beta to compute FM covariance - # should we let user choose which beta to use ? 
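  # The block below keeps only the latest period's exposures in B.final; the
  # factor model return covariance is then assembled as
  #   cov.returns = B.final %*% Cov.factors$cov %*% t(B.final) + D
  # with D = diag(resid.vars) or, when full.resid.cov=TRUE, the full residual
  # covariance (the vignette writes this as Sigma_x = B Sigma_f B' + D).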
B.final[, numeric.columns] <- as.matrix(data[ (as.numeric(data[[datevar]]) == timedates[numTimePoints]), exposures.numeric]) rownames(B.final) = assets colnames(B.final) = colnames(f.hat)
+ if (length(exposures.factor)) { B.final[, grep(exposures.factor, x = colnames)][cbind(seq(numAssets),
- as.numeric(data[data[[datevar]] == timedates[numTimePoints], exposures.factor]))] <- 1 }
+ cov.returns <- B.final %*% Cov.factors$cov %*% t(B.final)
+ if (full.resid.cov) { D.hat$cov } else { D.hat }
@@ -436,7 +433,7 @@ Cov.resids <- D.hat } else {
- Cov.resids <- NULL
+ Cov.resids <- diag(resid.vars)
} # # # r-square for each asset = 1 - SSE/SST
Modified: pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw
===================================================================
--- pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-12 03:14:38 UTC (rev 2767)
+++ pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-12 21:09:34 UTC (rev 2768)
@@ -15,8 +15,8 @@ \subsection{Fundamental Factor Model} A factor model is defined as \\
-\begin{equation} \label{fm}
- r_t = bf + \epsilon_t\;,t=1 \cdots T
+\begin{equation}
+ r_t = b f_t + \epsilon_t\;, \quad t = 1 \cdots T \label{fm}
\end{equation} where $r_t$ is N x 1, b is N x K and $f_t$ is K x 1; N is the number of variables and K is the number of factors. b is usually called the factor exposures or factor loadings, and $f_t$ the factor returns. $\epsilon_t$ is serially uncorrelated but may be cross-correlated. The model is useful for fitting, for example, asset returns. The famous CAPM (Capital Asset Pricing Model) is a one-factor model with $f_t$ equal to the market return.
@@ -26,19 +26,19 @@ \end{equation} $f_M$ is normally called the market factor or world factor, depending on whether the context is the country level or the global level. Econometrically, it is the intercept term of the fundamental factor model. $f_t$ is estimated cross-sectionally in each period t.
-This approach is also called BARRA type approach since it is initially deceloped by BARRA and later on been merged by MSCI. The famous Barra global equity model (GEM3) contains more than 50 factors.
+This approach is also called the BARRA-type approach, since it was initially developed by BARRA, which later merged with MSCI. The famous Barra global equity model (GEM3) contains more than 50 factors.
-\section{Example}
-We will walk through some examples in this section. First example will use style factors like size and then we industry/country dummies.
+\section{Example 1}
+We will walk through the first example in this section, using style factors like size.
\subsection{Loading Data} Let's look at the arguments of \verb@fitFundamentalFactorModel()@, which fits the fundamental factor model in \verb@factorAnalytics@.
<<>>= library(factorAnalytics) args(fitFundamentalFactorModel) @
-\verb at data@ is in class of \verb at data.frame@ and is required to have \emph{assetvar},\emph{returnvar} and \emph{datevar}. One can image data is like panel data setup and need firm variable and time variable. So data has dimension (N x T) and at least 3 colnumes to specify information needed.
+\verb@data@ is of class \verb@data.frame@ and is required to have \emph{assetvar}, \emph{returnvar} and \emph{datevar}. One can imagine the data as a panel data setup, which needs a firm variable and a time variable. So the data has dimension (N x T) and at least 3 columns to specify the information needed.
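As a quick illustration of that layout (a sketch only, assuming the \verb@equity@ data built below), each row holds one asset at one date together with its return:
<<eval=FALSE>>=
# long/panel format: datevar (DATACQTR), assetvar (tic), returnsvar (RET)
head(equity[, c("DATACQTR", "tic", "RET")])
@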
-We download data from CRSP/Compustat quarterly fundamental and name \verb@equity@ which contains 67 stocks from January 2000 to Decenmber 2013.
+We download data from the CRSP/Compustat quarterly fundamentals and name it \verb@equity@; it contains 67 stocks from January 2000 to December 2013.
<<>>= #equity <- data(equity) @
@@ -54,14 +54,14 @@ function(x) Delt(x$PRCCQ)))) names(equity)[22] <- "RET" @
-We want market value and book-to-market ratio too. market vale can be achieved by commom stocks outstading x price and book value we use commom/ordinary equity value.
+We want market value and the book-to-market ratio too. Market value can be computed as common shares outstanding times price; for book value we use common/ordinary equity value. We also take the log of market value.
<<>>=
-equity$MV <- equity$PRCCQ*equity$CSHOQ
+equity$MV <- log(equity$PRCCQ*equity$CSHOQ)
equity$BM <- equity$CEQQ/equity$MV @ Now we use model \ref{ffm} where K=2 and b = [ MV , BM ].
-We will get an error message if \verb@datevar@ is not \verb@as.Date@ format compatible. In our example, our date variable is \emph{DATACQTR} and looks like "2000Q1". We have to convert it to \verb@as.Date@ compatible. We can utilize \verb@as.yearqtr@ to do it. Aslo, we will use character string for asset variable instead of factor.
+We will get an error message if \verb@datevar@ is not \verb@as.Date@-compatible. In our example, the date variable is \emph{DATACQTR} and looks like "2000Q1", so we have to convert it to an \verb@as.Date@-compatible format; we can utilize \verb@as.yearqtr@ to do this. Also, we will use a character string for the asset variable instead of a factor.
<<>>= a <- unlist( lapply(strsplit(as.character(equity$DATACQTR),"Q"), function(x) paste(x[[1]],"-",x[[2]],sep="") ) )
@@ -71,6 +71,7 @@ # delete the first element of each assets @
+\subsection{Fit the Model} Fit the function:
<<>>= fit.fund <- fitFundamentalFactorModel(exposure.names=c("BM","MV"),datevar="yearqtr",
@@ -78,13 +79,86 @@ names(fit.fund) @
+A few notes on fitting the fundamental factor model. First, so far this function can only deal with a balanced panel, because we want to extract the return covariance, residuals and so on. Second, \verb@datevar@ has to be \verb@as.Date@-compatible, otherwise the function cannot read the time index. This is somewhat inconvenient, but it makes sure we do not mess up the time index.
+The default fit method for \verb@fitFundamentalFactorModel()@ is classic OLS, and the covariance matrix is the classic covariance matrix defined by \verb@covClassic()@ in the \verb@robust@ package. One can change to robust estimation and robust covariance matrix estimation.
+\verb@returns.cov@ contains information about the return covariance. The return covariance is
+\[ \Sigma_x = B \Sigma_f B' + D. \] If \verb@full.resid.cov@ is \emph{FALSE}, D is a diagonal matrix with the residual variances on the diagonal. If \emph{TRUE}, D is the full covariance matrix of the residuals.
+<<>>=
+names(fit.fund$returns.cov) @
+One can check out \verb@fit.fund$factor.cov@, \verb@fit.fund$resids.cov@ and \verb@fit.fund$resid.variance@ for details.
+Factor returns, residuals and t-stats are of class \verb@xts@.
+<<>>=
+fit.fund$factor.returns
+fit.fund$residuals
+fit.fund$tstats @
+There are a few generic functions one can utilize: \verb@predict@, \verb@summary@, \verb@print@ and \verb@plot@.
+<<>>=
+summary(fit.fund)
+predict(fit.fund)
+print(fit.fund) @
+If \emph{newdata} is not specified in \verb@predict()@, the fitted values of the fundamental factor model are shown; otherwise, the predicted values are shown.
+The \verb@plot()@ method has several options to choose from:
+\begin{verbatim}
+> plot(fit.fund)
+Factor Analytic Plot
+Make a plot selection (or 0 to exit):
+
+
+1: Factor returns
+2: Residual plots
+3: Variance of Residuals
+4: Factor Model Correlation
+5: Factor Contributions to SD
+6: Factor Contributions to ES
+7: Factor Contributions to VaR
+
+Selection: plot(fit.fund)
+Enter an item from the menu, or 0 to exit
+\end{verbatim}
+
+For example, choosing 1 will plot the factor returns, which looks like this:
+<<>>=
+plot(fit.fund,which.plot=1,max.show=3) @
+
+\begin{figure}
+\begin{center}
+<<>>=
+<<>> @
+\end{center}
+\caption{Time Series of factor returns}
+\label{fig1}
+\end{figure}
+
+\section{Example 2: Barra type industry/country model}
+In a global equity model or a specific country equity model, modelers usually want to use industry/country dummies. In our example, we have 63 stocks in different industries. Specifically,
+\begin{equation}
+x_{i,t} = a_{i,t} + \sum_{j=1}^{J} b_{i,j} f_{j,t} + \epsilon_{i,t}\;, \quad \mbox{for each } i, t
+\end{equation}
+where $b_{i,j} = 1$ if stock i is in industry j and $b_{i,j}=0$ otherwise.
+In matrix form: \[ x_t = B f_t + \epsilon_t \] where B is the N x J matrix of industry dummies.
+
+\emph{SPCINDCD} in our data is the S\&P industry code; all we have to do to fit the industry model is to add this variable name to \verb@exposure.names@. Be sure this variable is \emph{character}, not \emph{numeric}; otherwise the function will not create dummies.
+
+<<>>=
+equity$SPCINDCD <- as.character(equity$SPCINDCD)
+fit.ind <- fitFundamentalFactorModel(exposure.names=c("SPCINDCD"),datevar="yearqtr",
+ returnsvar ="RET",assetvar="tic",wls=FALSE,data=equity) @
+One can also use the generic functions (plot, summary, and so on).
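To see the dummy coding itself, one can rebuild the N x J matrix $B$ for a single period by hand (a sketch using base R's \verb@model.matrix@, not a package function; it assumes the \verb@yearqtr@ column created earlier):
<<eval=FALSE>>=
last <- equity[equity$yearqtr == max(equity$yearqtr), ]
B <- model.matrix(~ SPCINDCD - 1, data = last)  # one column per industry code
dim(B)  # N assets x J industries
@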
+ + \end{document} \ No newline at end of file From noreply at r-forge.r-project.org Mon Aug 12 23:44:10 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 12 Aug 2013 23:44:10 +0200 (CEST) Subject: [Returnanalytics-commits] r2769 - in pkg/FactorAnalytics: R vignettes Message-ID: <20130812214410.E87E018474E@r-forge.r-project.org> Author: chenyian Date: 2013-08-12 23:44:10 +0200 (Mon, 12 Aug 2013) New Revision: 2769 Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw Log: edit vignettes/fundamentalFM.Rnw
Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R
===================================================================
--- pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-12 21:09:34 UTC (rev 2768)
+++ pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-12 21:44:10 UTC (rev 2769)
@@ -352,7 +352,7 @@ # create matrix for fit FE.hat.mat <- matrix(NA, ncol = ncols, nrow = numTimePoints,
- dimnames = list(as.character(timedates),cnames))
+ dimnames = list(as.character(timedates), cnames))
# give each element t names
@@ -410,14 +410,13 @@ B.final[, match("(Intercept)", colnames, 0)] <- 1 numeric.columns <- match(exposures.numeric, colnames, 0) # only take the latest beta to compute FM covariance
- B.final[, numeric.columns] <- as.matrix(data[ (as.numeric(data[[datevar]]) ==
- timedates[numTimePoints]), exposures.numeric])
+ B.final[, numeric.columns] <- as.matrix(data[ (data[[datevar]] == timedates[numTimePoints]), exposures.numeric])
rownames(B.final) = assets colnames(B.final) = colnames(f.hat)
if (length(exposures.factor)) { B.final[, grep(exposures.factor, x = colnames)][cbind(seq(numAssets),
- as.numeric(data[ data[[datevar]] == timedates[numTimePoints],
+ (data[ data[[datevar]] == timedates[numTimePoints],
exposures.factor]))] <- 1 }
Modified: pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw
===================================================================
--- pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-12 21:09:34 UTC (rev 2768)
+++ pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-12 21:44:10 UTC (rev 2769)
@@ -6,7 +6,7 @@ \begin{document} \SweaveOpts{concordance=TRUE}
-\title{factorAnalytics: fundamental factor model}
+\title{factorAnalytics: Fundamental Factor Model}
\author{Yi-An Chen} \maketitle
@@ -160,5 +160,31 @@ @ One can also use the generic functions (plot, summary, and so on).
+\verb@fitFundamentalFactorModel()@ supports industry/country dummy factor exposures and style factor exposures together. Try
+<<>>=
+fit.mix <- fitFundamentalFactorModel(exposure.names=c("BM","MV","SPCINDCD"),
+ datevar="yearqtr",returnsvar ="RET",
+ assetvar="tic",wls=FALSE,data=equity) @
+
+\section{Standardizing Factor Exposure}
+It is common to standardize factor exposures to have a weighted mean of 0 and a standard deviation of 1. The weights are often taken as proportional to the square root of market capitalization, although other weighting schemes are possible.
+
+We will redo Example 1, but with factor exposures standardized using the square root of market capitalization. First we create a weighting variable.
+
+<<>>=
+equity$weight <- sqrt(exp(equity$MV)) # MV was logged earlier, so exp() recovers market cap @ We can choose \verb@standardized.factor.exposure@ to be \verb@TRUE@ and set \verb@weight.var@ equal to the weighting variable.
+<>= +fit.fund2 <- fitFundamentalFactorModel(exposure.names=c("BM","MV"), + datevar="yearqtr",returnsvar ="RET", + assetvar="tic",wls=TRUE,data=equity, + standardized.factor.exposure = TRUE, + weight.var = "weight" ) +@ + + + \end{document} \ No newline at end of file From noreply at r-forge.r-project.org Tue Aug 13 00:38:57 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 13 Aug 2013 00:38:57 +0200 (CEST) Subject: [Returnanalytics-commits] r2770 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: Week1/Code Week4/Code Message-ID: <20130812223857.4FB16185077@r-forge.r-project.org> Author: shubhanm Date: 2013-08-13 00:38:56 +0200 (Tue, 13 Aug 2013) New Revision: 2770 Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/Return.Okunev.R pkg/PerformanceAnalytics/sandbox/Shubhankit/Week4/Code/AcarSim.R Log: Reoxygenization, modification(Week 4, 3) as well as addtion of CDD Optimization Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/Return.Okunev.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/Return.Okunev.R 2013-08-12 21:44:10 UTC (rev 2769) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/Return.Okunev.R 2013-08-12 22:38:56 UTC (rev 2770) @@ -1,3 +1,25 @@ +#' +#'The objective is to determine the true underlying return by removing the +#' autocorrelation structure in the original return series without making any assumptions +#' regarding the actual time series properties of the underlying process. We are +#' implicitly assuming by this approach that the autocorrelations that arise in reported +#'returns are entirely due to the smoothing behavior funds engage in when reporting +#' results. In fact, the method may be adopted to produce any desired +#' level of autocorrelation at any lag and is not limited to simply eliminating all +#'autocorrelations.It can be be said as the general form of Geltner Return Model +#' +#' @references "Hedge Fund Risk Factors and Value at Risk of Credit +#' Trading Strategies , John Okunev & Derek White +#' +#' @keywords ts multivariate distribution models +#' @examples +#' +#' data(managers) +#' head(Return.Okunev(managers[,1:3]),n=3) +#' +#' +#' @export + Return.Okunev<-function(R,q=3) { column.okunev=R @@ -9,7 +31,7 @@ } return(c(column.okunev)) } - +#' Recusrsive Okunev Call Function quad <- function(R,d) { coeff = as.numeric(acf(as.numeric(edhec[,1]), plot = FALSE)[1:2][[1]]) @@ -19,3 +41,15 @@ #a <- a[!is.na(a)] return(c(ans)) } +############################################################################### +# R (http://r-project.org/) Econometrics for Performance and Risk Analysis +# +# Copyright (c) 2004-2012 Peter Carl and Brian G. 
Peterson +# +# This R package is distributed under the terms of the GNU Public License (GPL) +# for full details see the file COPYING +# +# $Id: Return.Okunev.R 2163 2012-07-16 00:30:19Z braverock $ +# +############################################################################### + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week4/Code/AcarSim.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week4/Code/AcarSim.R 2013-08-12 21:44:10 UTC (rev 2769) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week4/Code/AcarSim.R 2013-08-12 22:38:56 UTC (rev 2770) @@ -5,6 +5,8 @@ #' We have simulated cash flows over a period of 36 monthly returns and measured maximum #'drawdown for varied levels of annualised return divided by volatility varying from minus #' two to two by step of 0.1. The process has been repeated six thousand times. +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of +#' asset returns #' @author R Project #' @references DRAWDOWN MEASURE IN PORTFOLIO OPTIMIZATION,\emph{International Journal of Theoretical and Applied Finance} #' ,Fall 1994, 49-58.Vol. 8, No. 1 (2005) 13-58 @@ -15,8 +17,11 @@ #' @rdname Cdrawdown #' @export AcarSim <- - function() + function(R) { + R = checkData(Ra, method="xts") + # Get dimensions and labels + # simulated parameters using edhec data mu=mean(Return.annualized(edhec)) monthly=(1+mu)^(1/12)-1 sig=StdDev.annualized(edhec[,1])[1]; @@ -68,7 +73,7 @@ lty = c(2, -1, 1), pch = c(-1, 3, 4), merge = TRUE, bg='gray90') title("Maximum Drawdown/Volatility as a function of Return/Volatility -36 monthly returns simulated 6,000 time") +36 monthly returns simulated 6,000 times") } ############################################################################### From noreply at r-forge.r-project.org Tue Aug 13 00:39:54 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 13 Aug 2013 00:39:54 +0200 (CEST) Subject: [Returnanalytics-commits] r2771 - pkg/PerformanceAnalytics/sandbox/Shubhankit/Week5/Code Message-ID: <20130812223954.61A51185077@r-forge.r-project.org> Author: shubhanm Date: 2013-08-13 00:39:54 +0200 (Tue, 13 Aug 2013) New Revision: 2771 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week5/Code/CDDopt.R Log: Conditional Draw down Optimization Code Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week5/Code/CDDopt.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week5/Code/CDDopt.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week5/Code/CDDopt.R 2013-08-12 22:39:54 UTC (rev 2771) @@ -0,0 +1,24 @@ +cDDOpt = function(rmat, alpha=0.05, rmin=0, wmin=0, wmax=1, weight.sum=1) +{ + require(Rglpk) + n = ncol(rmat) # number of assets + s = nrow(rmat) # number of scenarios i.e. 
periods + averet = colMeans(rmat) + # creat objective vector, constraint matrix, constraint rhs + Amat = rbind(cbind(rbind(1,averet),matrix(data=0,nrow=2,ncol=s+1)), + cbind(rmat,diag(s),1)) + objL = c(rep(0,n), as.numeric(Cdrawdown(rmat,.9)), -1) + bvec = c(weight.sum,rmin,rep(0,s)) + # direction vector + dir.vec = c("==",">=",rep(">=",s)) + # bounds on weights + bounds = list(lower = list(ind = 1:n, val = rep(wmin,n)), + upper = list(ind = 1:n, val = rep(wmax,n))) + res = Rglpk_solve_LP(obj=objL, mat=Amat, dir=dir.vec, rhs=bvec, + types=rep("C",length(objL)), max=T, bounds=bounds) + w = as.numeric(res$solution[1:n]) + return(list(w=w,status=res$status)) +} +#' Guy Yollin work +#' +#' From noreply at r-forge.r-project.org Tue Aug 13 01:42:56 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 13 Aug 2013 01:42:56 +0200 (CEST) Subject: [Returnanalytics-commits] r2772 - in pkg/PortfolioAnalytics: demo sandbox Message-ID: <20130812234256.F007C185BE7@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-13 01:42:56 +0200 (Tue, 13 Aug 2013) New Revision: 2772 Added: pkg/PortfolioAnalytics/sandbox/testing_return_target.R Modified: pkg/PortfolioAnalytics/demo/constrained_optim.R pkg/PortfolioAnalytics/demo/sortino.R pkg/PortfolioAnalytics/demo/testing_GenSA.R pkg/PortfolioAnalytics/demo/testing_ROI.R pkg/PortfolioAnalytics/demo/testing_pso.R Log: fixing the files in the demo folder to run using v1 specification Modified: pkg/PortfolioAnalytics/demo/constrained_optim.R =================================================================== --- pkg/PortfolioAnalytics/demo/constrained_optim.R 2013-08-12 22:39:54 UTC (rev 2771) +++ pkg/PortfolioAnalytics/demo/constrained_optim.R 2013-08-12 23:42:56 UTC (rev 2772) @@ -2,10 +2,11 @@ require("PortfolioAnalytics") require("DEoptim") data(edhec) +pspec <- portfolio.spec(assets=colnames(edhec[, 1:10])) constraints=constraint(assets = colnames(edhec[, 1:10]), min = 0.01, max = 0.4, min_sum=1, max_sum=1, weight_seq = generatesequence()) # note that if you wanted to do a random portfolio optimization, mun_sum of .99 and max_sum of 1.01 might be more appropriate -constraints<-add.objective(constraints, type="return", name="mean", arguments=list(), enabled=TRUE) -constraints<-add.objective(constraints, type="risk_budget", name="ES", arguments=list(), enabled=TRUE, p=.95, min_prisk=.05, max_prisk=.15) +constraints<-add.objective_v1(constraints, type="return", name="mean", arguments=list(), enabled=TRUE) +constraints<-add.objective_v1(constraints, type="risk_budget", name="ES", arguments=list(), enabled=TRUE, p=.95, min_prisk=.05, max_prisk=.15) #constraints #now set some additional bits # I should have set the multiplier for returns to negative @@ -17,14 +18,14 @@ print("We'll use a search_size parameter of 1000 for this demo, but realistic portfolios will likely require search_size parameters much larger, the default is 20000 which is almost always large enough for any realistic portfolio and constraints, but will take substantially longer to run.") # look for a solution using both DEoptim and random portfolios -opt_out<-optimize.portfolio(R=edhec[,1:10], constraints, optimize_method="DEoptim", search_size=1000, trace=TRUE) +opt_out<-optimize.portfolio_v1(R=edhec[,1:10], constraints=constraints, optimize_method="DEoptim", search_size=1000, trace=TRUE) #we need a little more wiggle in min/max sum for random portfolios or it takes too long to converge constraints$min_sum<-.99 constraints$max_sum<-1.01 
-opt_out_random<-optimize.portfolio(R=edhec[,1:10], constraints, optimize_method="random", search_size=1000, trace=TRUE) +opt_out_random<-optimize.portfolio_v1(R=edhec[,1:10], constraints=constraints, optimize_method="random", search_size=1000, trace=TRUE) # now lets try a portfolio that rebalances quarterly -opt_out_rebalancing<-optimize.portfolio.rebalancing(R=edhec[,1:10], constraints, optimize_method="DEoptim", search_size=1000, trace=FALSE,rebalance_on='quarters') +opt_out_rebalancing<-optimize.portfolio.rebalancing_v1(R=edhec[,1:10], constraints, optimize_method="DEoptim", search_size=1000, trace=FALSE,rebalance_on='quarters') rebalancing_weights<-matrix(nrow=length(opt_out_rebalancing),ncol=length(opt_out_rebalancing[[1]]]$weights)) rownames(rebalancing_weights)<-names(opt_out_rebalancing) colnames(rebalancing_weights)<-names(opt_out_rebalancing[[1]]$weights) @@ -33,4 +34,4 @@ charts.PerformanceSummary(rebalancing_returns) # and now lets rebalance quarterly with 48 mo trailing -opt_out_trailing<-optimize.portfolio.rebalancing(R=edhec[,1:10], constraints, optimize_method="DEoptim", search_size=1000, trace=FALSE,rebalance_on='quarters',trailing_periods=48,training_period=48) \ No newline at end of file +opt_out_trailing<-optimize.portfolio.rebalancing_v1(R=edhec[,1:10], constraints, optimize_method="DEoptim", search_size=1000, trace=FALSE,rebalance_on='quarters',trailing_periods=48,training_period=48) \ No newline at end of file Modified: pkg/PortfolioAnalytics/demo/sortino.R =================================================================== --- pkg/PortfolioAnalytics/demo/sortino.R 2013-08-12 22:39:54 UTC (rev 2771) +++ pkg/PortfolioAnalytics/demo/sortino.R 2013-08-12 23:42:56 UTC (rev 2772) @@ -32,11 +32,11 @@ #'# Example 1 maximize Sortino Ratio SortinoConstr <- constraint(assets = colnames(indexes[,1:4]), min = 0.05, max = 1, min_sum=.99, max_sum=1.01, weight_seq = generatesequence(by=.001)) -SortinoConstr <- add.objective(SortinoConstr, type="return", name="SortinoRatio", enabled=TRUE, arguments = list(MAR=MAR)) -SortinoConstr <- add.objective(SortinoConstr, type="return", name="mean", enabled=TRUE, multiplier=0) # multiplier 0 makes it availble for plotting, but not affect optimization +SortinoConstr <- add.objective_v1(SortinoConstr, type="return", name="SortinoRatio", enabled=TRUE, arguments = list(MAR=MAR)) +SortinoConstr <- add.objective_v1(SortinoConstr, type="return", name="mean", enabled=TRUE, multiplier=0) # multiplier 0 makes it availble for plotting, but not affect optimization ### Use random portfolio engine -SortinoResult<-optimize.portfolio(R=indexes[,1:4], constraints=SortinoConstr, optimize_method='random', search_size=2000, trace=TRUE, verbose=TRUE) +SortinoResult<-optimize.portfolio_v1(R=indexes[,1:4], constraints=SortinoConstr, optimize_method='random', search_size=2000, trace=TRUE, verbose=TRUE) plot(SortinoResult, risk.col='SortinoRatio') ### alternately, Use DEoptim engine @@ -44,7 +44,7 @@ #plot(SortinoResultDE, risk.col='SortinoRatio') ### now rebalance quarterly -SortinoRebalance <- optimize.portfolio.rebalancing(R=indexes[,1:4], constraints=SortinoConstr, optimize_method="random", trace=TRUE, rebalance_on='quarters', trailing_periods=NULL, training_period=36, search_size=2000) +SortinoRebalance <- optimize.portfolio.rebalancing_v1(R=indexes[,1:4], constraints=SortinoConstr, optimize_method="random", trace=TRUE, rebalance_on='quarters', trailing_periods=NULL, training_period=36, search_size=2000) 
############################################################################### # R (http://r-project.org/) Numeric Methods for Optimization of Portfolios Modified: pkg/PortfolioAnalytics/demo/testing_GenSA.R =================================================================== --- pkg/PortfolioAnalytics/demo/testing_GenSA.R 2013-08-12 22:39:54 UTC (rev 2771) +++ pkg/PortfolioAnalytics/demo/testing_GenSA.R 2013-08-12 23:42:56 UTC (rev 2772) @@ -23,10 +23,10 @@ mu.port <- mean(colMeans(R)) gen.constr <- constraint(assets = funds, min=-2, max=2, min_sum=0.99, max_sum=1.01, risk_aversion=1) -gen.constr <- add.objective(constraints=gen.constr, type="return", name="mean", enabled=FALSE, target=mu.port) -gen.constr <- add.objective(constraints=gen.constr, type="risk", name="var", enabled=FALSE, risk_aversion=10) -gen.constr <- add.objective(constraints=gen.constr, type="risk", name="CVaR", enabled=FALSE) -gen.constr <- add.objective(constraints=gen.constr, type="risk", name="sd", enabled=FALSE) +gen.constr <- add.objective_v1(constraints=gen.constr, type="return", name="mean", enabled=FALSE, target=mu.port) +gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="var", enabled=FALSE, risk_aversion=10) +gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="CVaR", enabled=FALSE) +gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="sd", enabled=FALSE) # ===================== @@ -37,14 +37,14 @@ max.port$objectives[[1]]$enabled <- TRUE max.port$objectives[[1]]$target <- NULL max.port$objectives[[1]]$multiplier <- -1 -max.solution <- optimize.portfolio(R, max.port, "GenSA", trace=TRUE) +max.solution <- optimize.portfolio_v1(R, max.port, "GenSA", trace=TRUE) # ===================== # Mean-variance: Fully invested, Global Minimum Variance Portfolio gmv.port <- gen.constr gmv.port$objectives[[4]]$enabled <- TRUE -gmv.solution <- optimize.portfolio(R, gmv.port, "GenSA", trace=TRUE) +gmv.solution <- optimize.portfolio_v1(R, gmv.port, "GenSA", trace=TRUE) @@ -56,7 +56,7 @@ cvar.port$max <- rep(1,N) cvar.port$objectives[[3]]$enabled <- TRUE cvar.port$objectives[[3]]$arguments <- list(p=0.95, clean="boudt") -cvar.solution <- optimize.portfolio(R, cvar.port, "pso") +cvar.solution <- optimize.portfolio_v1(R, cvar.port, "GenSA") Modified: pkg/PortfolioAnalytics/demo/testing_ROI.R =================================================================== --- pkg/PortfolioAnalytics/demo/testing_ROI.R 2013-08-12 22:39:54 UTC (rev 2771) +++ pkg/PortfolioAnalytics/demo/testing_ROI.R 2013-08-12 23:42:56 UTC (rev 2772) @@ -20,9 +20,9 @@ N <- length(funds) gen.constr <- constraint(assets = colnames(edhec), min=-Inf, max =Inf, min_sum=1, max_sum=1, risk_aversion=1) -gen.constr <- add.objective(constraints=gen.constr, type="return", name="mean", enabled=FALSE, multiplier=0, target=mu.port) -gen.constr <- add.objective(constraints=gen.constr, type="risk", name="var", enabled=FALSE, multiplier=0, risk_aversion=10) -gen.constr <- add.objective(constraints=gen.constr, type="risk", name="CVaR", enabled=FALSE, multiplier=0) +gen.constr <- add.objective_v1(constraints=gen.constr, type="return", name="mean", enabled=FALSE, multiplier=0, target=mu.port) +gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="var", enabled=FALSE, multiplier=0, risk_aversion=10) +gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="CVaR", enabled=FALSE, multiplier=0) # ===================== @@ -33,7 +33,7 @@ max.port$max <- rep(0.30,N) 
max.port$objectives[[1]]$enabled <- TRUE max.port$objectives[[1]]$target <- NULL -max.solution <- optimize.portfolio(edhec, max.port, "ROI") +max.solution <- optimize.portfolio_v1(edhec, max.port, "ROI") # ===================== @@ -42,7 +42,7 @@ gmv.port <- gen.constr gmv.port$objectives[[2]]$enabled <- TRUE gmv.port$objectives[[2]]$risk_aversion <- 1 -gmv.solution <- optimize.portfolio(edhec, gmv.port, "ROI") +gmv.solution <- optimize.portfolio_v1(edhec, gmv.port, "ROI") # ======================== @@ -51,7 +51,7 @@ target.port <- gen.constr target.port$objectives[[1]]$enabled <- TRUE target.port$objectives[[2]]$enabled <- TRUE -target.solution <- optimize.portfolio(edhec, target.port, "ROI") +target.solution <- optimize.portfolio_v1(edhec, target.port, "ROI") # ======================== @@ -62,7 +62,7 @@ dollar.neu.port$max_sum <- 0 dollar.neu.port$objectives[[1]]$enabled <- TRUE dollar.neu.port$objectives[[2]]$enabled <- TRUE -dollar.neu.solution <- optimize.portfolio(edhec, dollar.neu.port, "ROI") +dollar.neu.solution <- optimize.portfolio_v1(edhec, dollar.neu.port, "ROI") # ======================== @@ -71,7 +71,7 @@ cvar.port <- gen.constr cvar.port$objectives[[1]]$enabled <- TRUE cvar.port$objectives[[3]]$enabled <- TRUE -cvar.solution <- optimize.portfolio(edhec, cvar.port, "ROI") +cvar.solution <- optimize.portfolio_v1(edhec, cvar.port, "ROI") # ===================== @@ -84,7 +84,7 @@ groups.port$cUP <- rep(0.30,length(groups)) groups.port$objectives[[2]]$enabled <- TRUE groups.port$objectives[[2]]$risk_aversion <- 1 -groups.solution <- optimize.portfolio(edhec, groups.port, "ROI") +groups.solution <- optimize.portfolio_v1(edhec, groups.port, "ROI") # ======================== @@ -97,5 +97,5 @@ group.cvar.port$cUP <- rep(0.30,length(groups)) group.cvar.port$objectives[[1]]$enabled <- TRUE group.cvar.port$objectives[[3]]$enabled <- TRUE -group.cvar.solution <- optimize.portfolio(edhec, group.cvar.port, "ROI") +group.cvar.solution <- optimize.portfolio_v1(edhec, group.cvar.port, "ROI") Modified: pkg/PortfolioAnalytics/demo/testing_pso.R =================================================================== --- pkg/PortfolioAnalytics/demo/testing_pso.R 2013-08-12 22:39:54 UTC (rev 2771) +++ pkg/PortfolioAnalytics/demo/testing_pso.R 2013-08-12 23:42:56 UTC (rev 2772) @@ -22,10 +22,10 @@ mu.port <- mean(colMeans(R)) gen.constr <- constraint(assets = funds, min=-2, max=2, min_sum=0.99, max_sum=1.01, risk_aversion=1) -gen.constr <- add.objective(constraints=gen.constr, type="return", name="mean", enabled=FALSE, target=mu.port) -gen.constr <- add.objective(constraints=gen.constr, type="risk", name="var", enabled=FALSE, risk_aversion=10) -gen.constr <- add.objective(constraints=gen.constr, type="risk", name="CVaR", enabled=FALSE) -gen.constr <- add.objective(constraints=gen.constr, type="risk", name="sd", enabled=FALSE) +gen.constr <- add.objective_v1(constraints=gen.constr, type="return", name="mean", enabled=FALSE, target=mu.port) +gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="var", enabled=FALSE, risk_aversion=10) +gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="CVaR", enabled=FALSE) +gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="sd", enabled=FALSE) # ===================== @@ -36,14 +36,14 @@ max.port$objectives[[1]]$enabled <- TRUE max.port$objectives[[1]]$target <- NULL max.port$objectives[[1]]$multiplier <- -1 -max.solution <- optimize.portfolio(R, max.port, "pso", trace=TRUE) +max.solution <- 
optimize.portfolio_v1(R, max.port, "pso", trace=TRUE) # ===================== # Mean-variance: Fully invested, Global Minimum Variance Portfolio gmv.port <- gen.constr gmv.port$objectives[[4]]$enabled <- TRUE -gmv.solution <- optimize.portfolio(R, gmv.port, "pso", trace=TRUE) +gmv.solution <- optimize.portfolio_v1(R, gmv.port, "pso", trace=TRUE) @@ -55,7 +55,7 @@ cvar.port$max <- rep(1,N) cvar.port$objectives[[3]]$enabled <- TRUE cvar.port$objectives[[3]]$arguments <- list(p=0.95, clean="boudt") -cvar.solution <- optimize.portfolio(R, cvar.port, "pso", trace=TRUE) +cvar.solution <- optimize.portfolio_v1(R, cvar.port, "pso", trace=TRUE) Added: pkg/PortfolioAnalytics/sandbox/testing_return_target.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_return_target.R (rev 0) +++ pkg/PortfolioAnalytics/sandbox/testing_return_target.R 2013-08-12 23:42:56 UTC (rev 2772) @@ -0,0 +1,53 @@ +library(PortfolioAnalytics) +library(ROI) +require(ROI.plugin.glpk) +require(ROI.plugin.quadprog) + +data(edhec) +ret <- edhec[, 1:4] + +pspec1 <- portfolio.spec(assets=colnames(ret)) +pspec1 <- add.constraint(portfolio=pspec1, type="leverage", min_sum=1, max_sum=1) +pspec1 <- add.constraint(portfolio=pspec1, type="box") +pspec1 <- add.objective(portfolio=pspec1, type="return", name="mean", target=0.007) + +opt1 <- optimize.portfolio(R=ret, portfolio=pspec1, optimize_method="ROI") +opt1 +summary(opt1) +wts1 <- extractWeights(opt1) + +mean(ret %*% wts1) +colMeans(ret) %*% wts1 + +pspec2 <- portfolio.spec(assets=colnames(ret)) +pspec2 <- add.constraint(portfolio=pspec2, type="leverage", min_sum=1, max_sum=1) +pspec2 <- add.constraint(portfolio=pspec2, type="box") +pspec2 <- add.constraint(portfolio=pspec2, type="return", return_target=0.007) +pspec2 <- add.objective(portfolio=pspec2, type="return", name="mean") + +opt2 <- optimize.portfolio(R=ret, portfolio=pspec2, optimize_method="ROI") +opt2 +summary(opt2) +wts2 <- extractWeights(opt2) + +mean(ret %*% wts2) +colMeans(ret) %*% wts2 +all.equal(wts1, wts2) + +set.seed(123) +opt_de1 <- optimize.portfolio(R=ret, portfolio=pspec1, optimize_method="DEoptim", search_size=4000, traceDE=5) +opt_de1 +mean(ret %*% opt_de1$weights) + +set.seed(123) +opt_de2 <- optimize.portfolio(R=ret, portfolio=pspec2, optimize_method="DEoptim", search_size=4000, traceDE=5) +opt_de2 +mean(ret %*% opt_de2$weights) + +opt_rp1 <- optimize.portfolio(R=ret, portfolio=pspec1, optimize_method="random", search_size=4000) +opt_rp1 +mean(ret %*% opt_rp1$weights) + +opt_rp2 <- optimize.portfolio(R=ret, portfolio=pspec2, optimize_method="random", search_size=4000) +opt_rp2 +mean(ret %*% opt_rp2$weights) From noreply at r-forge.r-project.org Tue Aug 13 02:18:13 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 13 Aug 2013 02:18:13 +0200 (CEST) Subject: [Returnanalytics-commits] r2773 - pkg/PortfolioAnalytics/sandbox Message-ID: <20130813001813.CE1901842E0@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-13 02:18:13 +0200 (Tue, 13 Aug 2013) New Revision: 2773 Modified: pkg/PortfolioAnalytics/sandbox/portfolio_vignette.Rnw Log: modifying constraints examples in the portfolio vignette Modified: pkg/PortfolioAnalytics/sandbox/portfolio_vignette.Rnw =================================================================== --- pkg/PortfolioAnalytics/sandbox/portfolio_vignette.Rnw 2013-08-12 23:42:56 UTC (rev 2772) +++ pkg/PortfolioAnalytics/sandbox/portfolio_vignette.Rnw 2013-08-13 00:18:13 UTC (rev 2773) @@ 
-24,7 +24,6 @@ <<>>= library(PortfolioAnalytics) -library(PerformanceAnalytics) # just for edhec data set @ \subsection{Data} @@ -32,7 +31,7 @@ <<>>= data(edhec) -# Use the first 4 indices in edhec for a returns object +# Use the first 4 columns in edhec for a returns object returns <- edhec[, 1:4] print(head(returns, 5)) @@ -43,17 +42,17 @@ \section{Creating the "portfolio" object} The portfolio object is instantiated with the \code{portfolio.spec} function. The main argument to \code{portfolio.spec} is assets, which is a required argument. The assets argument can be a scalar value for the number of assets, a character vector of fund names, or a named vector of seed weights. If seed weights are not specified, an equal weight portfolio will be assumed. -The \code{pspec} object is an S3 object of class "portfolio". When first created, the portfolio object has an element named assets with the seed weights, an element named weight\_seq with a seed sequence of weights if specified, an empty constraints list and an empty objectives list. +The \code{pspec} object is an S3 object of class "portfolio". When first created, the portfolio object has an element named \code{assets} with the seed weights, an element named \code{category\_labels}, an element named \code{weight\_seq} with a seed sequence of weights if specified, an empty constraints list and an empty objectives list. <<>>= # Specify a portfolio object by passing a character vector for the # assets argument. pspec <- portfolio.spec(assets=fund.names) -print(pspec) +print.default(pspec) @ -\section{Adding Constraints} -Adding constraints to the portfolio object is done with \code{add.constraint}. The \code{add.constraint} function is the main interface for adding and/or updating constraints to the portfolio object. This function allows the user to specify the portfolio to add the constraints to, the type of constraints (currently 'weight\_sum', 'box', or 'group'), arguments for the constraint, and whether or not to enable the constraint. If updating an existing constraint, the indexnum argument can be specified. +\section{Adding Constraints to the Portfolio Object} +Adding constraints to the portfolio object is done with \code{add.constraint}. The \code{add.constraint} function is the main interface for adding and/or updating constraints to the portfolio object. This function allows the user to specify the portfolio to add the constraints to, the type of constraints, arguments for the constraint, and whether or not to enable the constraint (\code{enabled=TRUE} is the default). If updating an existing constraint, the indexnum argument can be specified. Here we add a constraint that the weights must sum to 1, or the full investment constraint. <<>>= @@ -61,28 +60,70 @@ pspec <- add.constraint(portfolio=pspec, type="weight_sum", min_sum=1, - max_sum=1, - enabled=TRUE) + max_sum=1) + +# The full investment constraint can also be specified with type="full_investment" + # pspec <- add.constraint(portfolio=pspec, type="full_investment") + +# Another common constraint is that portfolio weights sum to 0. +# This can be specified in any of the following ways +# pspec <- add.constraint(portfolio=pspec, type="weight_sum", +# min_sum=0, +# max_sum=0) +# pspec <- add.constraint(portfolio=pspec, type="dollar_neutral") +# pspec <- add.constraint(portfolio=pspec, type="active") @ Here we add box constraints for the asset weights.
The minimum weight of any asset must be greater than or equal to 0.05 and the maximum weight of any asset must be less than or equal to 0.4. The values for min and max can be passed in as scalars or vectors. If min and max are scalars, the values for min and max will be replicated as vectors to the length of assets. If min and max are not specified, a minimum weight of 0 and maximum weight of 1 are assumed. Note that min and max can be specified as vectors with different weights for linear inequality constraints. <<>>= +# Add box constraints pspec <- add.constraint(portfolio=pspec, type="box", min=0.05, - max=0.4, - enabled=TRUE) + max=0.4) + +# min and max can also be specified per asset +# pspec <- add.constraint(portfolio=pspec, +# type="box", +# min=c(0.05, 0, 0.08, 0.1), +# max=c(0.4, 0.3, 0.7, 0.55)) + +# A special case of box constraints is long only where min=0 and max=1 +# The default action is long only if min and max are not specified +# pspec <- add.constraint(portfolio=pspec, type="box") +# pspec <- add.constraint(portfolio=pspec, type="long_only") @ The portfolio object now has 2 objects in the constraints list. One object for the sum of weights constraint and another for the box constraint. <<>>= -print(pspec$constraints) +print(pspec) @ -Another common constraint that can be added is a group constraint. Group constraints are currently only supported by the ROI solvers, see the ROI vignette [still need to make this] for examples using group constraints. Box constraints and weight\_sum constraints are required by \code{optimize.portfolio}. Other constraint types will be added. +The \code{summary} function gives a more detailed view of the constraints. +<<>>= +summary(pspec) +@ + +Another common constraint that can be added is a group constraint. Group constraints are currently supported by the ROI, DEoptim, and random portfolio solvers. The following code groups the assets such that the first 3 assets are grouped together labeled GroupA and the fourth asset is in its own group labeled GroupB. The \code{group_min} argument specifies that the sum of the weights in GroupA must be greater than or equal to 0.1 and the sum of the weights in GroupB must be greater than or equal to 0.15. The \code{group_max} argument specifies that the sum of the weights in GroupA must be less than or equal to 0.85 and the sum of the weights in GroupB must be less than or equal to 0.55.The \code{group_labels} argument is optional and is useful for labeling groups in terms of market capitalization, sector, etc. +<<>>= +# Add group constraints +pspec <- add.constraint(portfolio=pspec, type="group", + groups=c(3, 1), + group_min=c(0.1, 0.15), + group_max=c(0.85, 0.55), + group_labels=c("GroupA", "GroupB")) +@ + +TODO +position limit +diversification +turnover +return target +specify constraints as their own objects + \section{Adding Objectives} -Business objectives can be added to the portfolio object with \code{add.objective\_v2}. The \code{add.objective\_v2} function is the main function for adding and/or updating business objectives to the portfolio object. This function allows the user to specify the portfolio to add the objectives to, the type (currently 'return', 'risk', or 'risk\_budget'), name of the objective function, arguments to the objective function, and whether or not to enable the objective. If updating an existing constraint, the indexnum argument can be specified. +Business objectives can be added to the portfolio object with \code{add.objective}. 
The \code{add.objective} function is the main function for adding and/or updating business objectives to the portfolio object. This function allows the user to specify the portfolio to add the objectives to, the type (currently 'return', 'risk', or 'risk\_budget'), name of the objective function, arguments to the objective function, and whether or not to enable the objective. If updating an existing objective, the indexnum argument can be specified. Here we add a risk objective to minimize portfolio variance. Note that the name of the function must correspond to a function in R. Many functions are available in the PerformanceAnalytics package. <<>>= From noreply at r-forge.r-project.org Tue Aug 13 19:03:10 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 13 Aug 2013 19:03:10 +0200 (CEST) Subject: [Returnanalytics-commits] r2774 - in pkg/PerformanceAnalytics/sandbox/pulkit: week1/code week3_4/code week3_4/tests Message-ID: <20130813170310.E4DAA1858EE at r-forge.r-project.org> Author: pulkit Date: 2013-08-13 19:03:10 +0200 (Tue, 13 Aug 2013) New Revision: 2774 Added: pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/tests/tests.TriplePenance.R Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/chart.SharpeEfficient.R pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/TuW.R Log: unit tests for triple penance Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/chart.SharpeEfficient.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/chart.SharpeEfficient.R 2013-08-13 00:18:13 UTC (rev 2773) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/chart.SharpeEfficient.R 2013-08-13 17:03:10 UTC (rev 2774) @@ -2,18 +2,22 @@ x = checkData(R) columns = ncol(x) - com - permutations<-function (n, r, v = 1:n) - { - if (r == 1) - matrix(v, n, 1) - else if (n == 1) - matrix(v, 1, r) - else { - X <- NULL - for (i in 1:n) X <- rbind(X, cbind(v[i], fn_perm_list(n - - 1, r - 1, v[-i]))) - X - } - } + + mat<-NULL + subset_sum<-function(numbers,target,partial){ + s = sum(partial) + print(s) + if(s==target){ + mat = rbind(mat,partial) + } + + x<-NULL + for(i in 1:length(numbers)){ + n = numbers[i] + remaining = numbers[(i+1):length(numbers)] + subset_sum(remaining,target,c(partial,n)) + } + } + subset_sum(c(1:10),10,0) +} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/TuW.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/TuW.R 2013-08-13 00:18:13 UTC (rev 2773) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/TuW.R 2013-08-13 17:03:10 UTC (rev 2774) @@ -31,7 +31,7 @@ #' #' @examples #' TuW(edhec,0.95,"ar") -#' uW(edhec[,1],0.95,"normal") # expected value 103.2573 +#' TuW(edhec[,1],0.95,"normal") # expected value 103.2573 TuW<-function(R,confidence,type=c("ar","normal"),...){ x = checkData(R) Added: pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/tests/tests.TriplePenance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/tests/tests.TriplePenance.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/tests/tests.TriplePenance.R 2013-08-13 17:03:10 UTC (rev 2774) @@ -0,0 +1,11 @@ +library(RUnit) +library(PerformanceAnalytics) +data(edhec) + +test_MaxDD<-function(){ + checkEqualsNumeric(MaxDD(edhec[,1],0.95,"normal"),6.618966,tolerance = 1.0e-6) +} + +test_MinTRL<-function(){ +
checkEqualsNumeric(TuW(edhec[,1],0.95,"normal"),103.2573,tolerance = 1.0e-3) +} From noreply at r-forge.r-project.org Tue Aug 13 22:34:50 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 13 Aug 2013 22:34:50 +0200 (CEST) Subject: [Returnanalytics-commits] r2775 - in pkg/PerformanceAnalytics/sandbox/pulkit: . week7 Message-ID: <20130813203450.5F1971854EB at r-forge.r-project.org> Author: pulkit Date: 2013-08-13 22:34:50 +0200 (Tue, 13 Aug 2013) New Revision: 2775 Added: pkg/PerformanceAnalytics/sandbox/pulkit/week7/ pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R Log: Drawdown using Generalized Pareto Distribution Added: pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R 2013-08-13 20:34:50 UTC (rev 2775) @@ -0,0 +1,39 @@ +#'@title +#'Modelling Drawdown using Extreme Value Theory +#' +#'@description +#'It has been shown empirically that drawdowns can be modelled using the Modified Generalized Pareto +#'distribution (MGPD), the Generalized Pareto Distribution (GPD) and other particular cases of MGPD such +#'as the Weibull distribution \eqn{MGPD(\gamma,0,\psi)} and the unit exponential distribution \eqn{MGPD(1,0,1)} +#' +#' The Modified Generalized Pareto Distribution is given by the following formula +#' +#' \deqn{G_{\eta}(m) = \left\{ \begin{array}{ll} 1-\left(1+\eta\frac{m^\gamma}{\psi}\right)^{-1/\eta}, & \mbox{if } \eta \neq 0 \\ 1- e^{-\frac{m^\gamma}{\psi}}, & \mbox{if } \eta = 0 \end{array} \right.} +#' +#' Here \eqn{\gamma \in R} is the modifying parameter. When \eqn{\gamma<1} the corresponding densities are +#' strictly decreasing with heavier tail; the GPD is recovered by setting \eqn{\gamma = 1}, and \eqn{\gamma > 1} yields lighter-tailed densities. +#' +#' The GPD is given by the following equation, \eqn{MGPD(1,\eta,\psi)}: +#' +#'\deqn{G_{\eta}(m) = \left\{ \begin{array}{ll} 1-\left(1+\eta\frac{m}{\psi}\right)^{-1/\eta}, & \mbox{if } \eta \neq 0 \\ 1- e^{-\frac{m}{\psi}}, & \mbox{if } \eta = 0 \end{array} \right.} +#' +#' The Weibull distribution is given by the following equation, \eqn{MGPD(\gamma,0,\psi)}: +#' +#'\deqn{G(m) = 1- e^{-\frac{m^\gamma}{\psi}}} +#' +#'The unit exponential distribution is given by the following equation, \eqn{MGPD(1,0,1)}: +#' +#'\deqn{G(m) = 1- e^{-m}} +#' +#' +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#' +#'@references +#'Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003). Coppead Working Paper Series No. 359. Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322.
+#' +#' + + + + + From noreply at r-forge.r-project.org Wed Aug 14 00:47:01 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 14 Aug 2013 00:47:01 +0200 (CEST) Subject: [Returnanalytics-commits] r2776 - pkg/PortfolioAnalytics/sandbox Message-ID: <20130813224701.39BBF185513@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-14 00:47:00 +0200 (Wed, 14 Aug 2013) New Revision: 2776 Modified: pkg/PortfolioAnalytics/sandbox/portfolio_vignette.Rnw pkg/PortfolioAnalytics/sandbox/portfolio_vignette.pdf Log: adding constraint types and specifying constraints as separate objects to the portfolio vignette Modified: pkg/PortfolioAnalytics/sandbox/portfolio_vignette.Rnw =================================================================== --- pkg/PortfolioAnalytics/sandbox/portfolio_vignette.Rnw 2013-08-13 20:34:50 UTC (rev 2775) +++ pkg/PortfolioAnalytics/sandbox/portfolio_vignette.Rnw 2013-08-13 22:47:00 UTC (rev 2776) @@ -74,7 +74,7 @@ # pspec <- add.constraint(portfolio=pspec, type="active") @ -Here we add box constraints for the asset weights. The minimum weight of any asset must be greater than or equal to 0.05 and the maximum weight of any asset must be less than or equal to 0.4. The values for min and max can be passed in as scalars or vectors. If min and max are scalars, the values for min and max will be replicated as vectors to the length of assets. If min and max are not specified, a minimum weight of 0 and maximum weight of 1 are assumed. Note that min and max can be specified as vectors with different weights for linear inequality constraints. +Here we add box constraints for the asset weights so that the minimum weight of any asset must be greater than or equal to 0.05 and the maximum weight of any asset must be less than or equal to 0.4. The values for min and max can be passed in as scalars or vectors. If min and max are scalars, the values for min and max will be replicated as vectors to the length of assets. If min and max are not specified, a minimum weight of 0 and maximum weight of 1 are assumed. Note that min and max can be specified as vectors with different weights for linear inequality constraints. <<>>= # Add box constraints pspec <- add.constraint(portfolio=pspec, @@ -105,7 +105,7 @@ @ -Another common constraint that can be added is a group constraint. Group constraints are currently supported by the ROI, DEoptim, and random portfolio solvers. The following code groups the assets such that the first 3 assets are grouped together labeled GroupA and the fourth asset is in its own group labeled GroupB. The \code{group_min} argument specifies that the sum of the weights in GroupA must be greater than or equal to 0.1 and the sum of the weights in GroupB must be greater than or equal to 0.15. The \code{group_max} argument specifies that the sum of the weights in GroupA must be less than or equal to 0.85 and the sum of the weights in GroupB must be less than or equal to 0.55.The \code{group_labels} argument is optional and is useful for labeling groups in terms of market capitalization, sector, etc. +Another common constraint that can be added is a group constraint. Group constraints are currently supported by the ROI, DEoptim, and random portfolio solvers. The following code groups the assets such that the first 3 assets are grouped together labeled GroupA and the fourth asset is in its own group labeled GroupB. 
The \code{group\_min} argument specifies that the sum of the weights in GroupA must be greater than or equal to 0.1 and the sum of the weights in GroupB must be greater than or equal to 0.15. The \code{group\_max} argument specifies that the sum of the weights in GroupA must be less than or equal to 0.85 and the sum of the weights in GroupB must be less than or equal to 0.55. The \code{group\_labels} argument is optional and is useful for labeling groups in terms of market capitalization, sector, etc. <<>>= # Add group constraints pspec <- add.constraint(portfolio=pspec, type="group", @@ -115,22 +115,71 @@ group_labels=c("GroupA", "GroupB")) @ -TODO -position limit -diversification -turnover -return target -specify constraints as their own objects +A position limit constraint can be added to limit the number of assets with non-zero, long, or short positions. The ROI solver used for maximizing return and ETL/ES/cVaR objectives supports position limit constraints for \code{max\_pos} (i.e. using the glpk plugin). \code{max\_pos} is not supported for the ROI solver using the quadprog plugin. Note that \code{max\_pos\_long} and \code{max\_pos\_short} are not supported for either ROI solver. Position limit constraints are fully supported for the DEoptim and random solvers. +<<>>= +# Add position limit constraint such that we have a maximum of three assets with non-zero weights. +pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos=3) + +# Can also specify maximum number of long positions and short positions +# pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos_long=3, max_pos_short=3) +@ + +A target diversification can be specified as a constraint. Diversification is defined as $diversification = 1 - \sum_{i=1}^N w_i^2$ for $N$ assets. The optimizers work by applying a penalty if the diversification value is more than 5\% away from \code{div\_target}. +<<>>= +pspec <- add.constraint(portfolio=pspec, type="diversification", div_target=0.7) +@ + +A target turnover can be specified as a constraint. The turnover is calculated from a set of initial weights. The initial weights can be specified; by default, they are the seed weights in the portfolio object. The optimizers work by applying a penalty if the turnover value is more than 5\% away from \code{turnover\_target}. Note that the turnover constraint is not currently supported for the ROI solvers. +<<>>= +pspec <- add.constraint(portfolio=pspec, type="turnover", turnover_target=0.2) +@ + +A target mean return can be specified as a constraint. +<<>>= +pspec <- add.constraint(portfolio=pspec, type="return", return_target=0.007) +@ + +This demonstrates adding constraints to the portfolio object. As an alternative to adding constraints directly to the portfolio object, constraints can be specified as separate objects. + +\subsection{Specifying Constraints as Separate Objects} +The following examples will demonstrate how to specify constraints as separate objects for all constraint types.
+ +<<>>= +# full investment constraint +weight_constr <- weight_sum_constraint(min_sum=1, max_sum=1) + +# box constraint +box_constr <- box_constraint(assets=pspec$assets, min=0, max=1) + +# group constraint +group_constr <- group_constraint(assets=pspec$assets, groups=c(3, 1), + group_min=c(0.1, 0.15), + group_max=c(0.85, 0.55), + group_labels=c("GroupA", "GroupB")) + +# position limit constraint +poslimit_constr <- position_limit_constraint(assets=pspec$assets, max_pos=3) + +# diversification constraint +div_constr <- diversification_constraint(div_target=0.7) + +# turnover constraint +to_constr <- turnover_constraint(turnover_target=0.2) + +# target return constraint +ret_constr <- return_constraint(return_target=0.007) +@ + \section{Adding Objectives} Business objectives can be added to the portfolio object with \code{add.objective}. The \code{add.objective} function is the main function for adding and/or updating business objectives to the portfolio object. This function allows the user to specify the portfolio to add the objectives to, the type (currently 'return', 'risk', or 'risk\_budget'), name of the objective function, arguments to the objective function, and whether or not to enable the objective. If updating an existing objective, the indexnum argument can be specified. Here we add a risk objective to minimize portfolio variance. Note that the name of the function must correspond to a function in R. Many functions are available in the PerformanceAnalytics package. <<>>= -pspec <- add.objective_v2(portfolio=pspec, - type='risk', - name='var', - enabled=TRUE) +pspec <- add.objective(portfolio=pspec, + type='risk', + name='var', + enabled=TRUE) @ The portfolio object now has 1 object in the objectives list for the risk objective we just added. Modified: pkg/PortfolioAnalytics/sandbox/portfolio_vignette.pdf =================================================================== (Binary files differ) From noreply at r-forge.r-project.org Wed Aug 14 01:58:02 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 14 Aug 2013 01:58:02 +0200 (CEST) Subject: [Returnanalytics-commits] r2777 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130813235802.3C589185A84 at r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-14 01:58:01 +0200 (Wed, 14 Aug 2013) New Revision: 2777 Added: pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/constraints.R pkg/PortfolioAnalytics/man/add.constraint.Rd Log: adding factor exposure to supported constraint types Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-13 22:47:00 UTC (rev 2776) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-13 23:58:01 UTC (rev 2777) @@ -40,6 +40,7 @@ export(extractWeights.optimize.portfolio.rebalancing) export(extractWeights.optimize.portfolio) export(extractWeights) +export(factor_exposure_constraint) export(fn_map) export(generatesequence) export(get_constraints) Modified: pkg/PortfolioAnalytics/R/constraints.R =================================================================== --- pkg/PortfolioAnalytics/R/constraints.R 2013-08-13 22:47:00 UTC (rev 2776) +++ pkg/PortfolioAnalytics/R/constraints.R 2013-08-13 23:58:01 UTC (rev 2777) @@ -183,12 +183,10 @@ #' #' This is the main function for adding and/or updating constraints in an object of type \code{\link{portfolio}}.
#' -#' In general, you will define your constraints as: 'weight_sum', 'box', 'group', 'turnover', 'diversification', or 'position_limit'. -#' #' Special cases for the weight_sum constraint are "full_investment" and "dollar_neutral" or "active" with appropriate values set for min_sum and max_sum. see \code{\link{weight_sum_constraint}} #' #' @param portfolio an object of class 'portfolio' to add the constraint to, specifying the constraints for the optimization, see \code{\link{portfolio.spec}} -#' @param type character type of the constraint to add or update, currently 'weight_sum', 'box', 'group', 'turnover', 'diversification', or 'position_limit' +#' @param type character type of the constraint to add or update, currently 'weight_sum' (also 'leverage' or 'weight'), 'box', 'group', 'turnover', 'diversification', 'position_limit', 'return', or 'factor_exposure' #' @param enabled TRUE/FALSE #' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE. #' @param \dots any other passthru parameters to specify constraints @@ -278,6 +276,13 @@ message=message, ...=...) }, + # factor exposure constraint + factor_exposure=, factor_exposures = {tmp_constraint <- factor_exposure_constraint(assets=assets, + type=type, + enabled=enabled, + message=message, + ...=...) + }, # Do nothing and return the portfolio object if type is NULL null = {return(portfolio)} ) @@ -613,6 +618,11 @@ if(inherits(constraint, "return_constraint")){ out$return_target <- constraint$return_target } + if(inherits(constraint, "factor_exposure_constraint")){ + out$B <- constraint$B + out$lower <- constraint$lower + out$upper <- constraint$upper + } } } @@ -781,6 +791,54 @@ return(Constraint) } +#' Constructor for factor exposure constraint +#' +#' This function is called by add.constraint when type="factor_exposure" is specified. see \code{\link{add.constraint}} +#' \code{B} can be either a vector or matrix of risk factor exposures (i.e. betas). +#' If \code{B} is a vector, the length of \code{B} must be equal to the number of +#' assets and lower and upper must be scalars. +#' If \code{B} is a matrix, the number of rows must be equal to the number +#' of assets and the number of columns represent the number of factors. The length +#' of lower and upper must be equal to the number of factors. +#' +#' @param type character type of the constraint +#' @param assets named vector of assets specifying seed weights +#' @param B vector or matrix of risk factor exposures +#' @param lower vector of lower bounds of constraints for risk factor exposures +#' @param upper vector of upper bounds of constraints for risk factor exposures +#' @param enabled TRUE/FALSE +#' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE.
+#' @param \dots any other passthru parameters to specify risk factor exposure constraints +#' @author Ross Bennett +#' @export +factor_exposure_constraint <- function(type="factor_exposure", assets, B, lower, upper, enabled=TRUE, message=FALSE, ...){ + # Number of assets + nassets <- length(assets) + + # Assume the user has passed in a vector of betas + if(is.vector(B)){ + # The number of betas must be equal to the number of assets + if(length(B) != nassets) stop("length of B must be equal to number of assets") + # The user passed in a vector of betas, lower and upper must be scalars + if(length(lower) != 1) stop("lower must be a scalar") + if(length(upper) != 1) stop("upper must be a scalar") + } + # The user has passed in a matrix for B + if(is.matrix(B)){ + # The number of rows in B must be equal to the number of assets + if(nrow(B) != nassets) stop("number of rows of B must be equal to number of assets") + # The user passed in a matrix for B --> lower and upper must be equal to the number of columns in the beta matrix + if(length(lower) != ncol(B)) stop("length of lower must be equal to the number of columns in the B matrix") + if(length(upper) != ncol(B)) stop("length of upper must be equal to the number of columns in the B matrix") + } + + Constraint <- constraint_v2(type=type, enabled=enabled, constrclass="factor_exposure_constraint", ...) + Constraint$B <- B + Constraint$lower <- lower + Constraint$upper <- upper + return(Constraint) +} + #' function for updating constraints, not well tested, may be broken #' #' can we use the generic update.default function? Modified: pkg/PortfolioAnalytics/man/add.constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/add.constraint.Rd 2013-08-13 22:47:00 UTC (rev 2776) +++ pkg/PortfolioAnalytics/man/add.constraint.Rd 2013-08-13 23:58:01 UTC (rev 2777) @@ -11,8 +11,9 @@ optimization, see \code{\link{portfolio.spec}}} \item{type}{character type of the constraint to add or - update, currently 'weight_sum', 'box', 'group', - 'turnover', 'diversification', or 'position_limit'} + update, currently 'weight_sum' (also 'leverage' or + 'weight'), 'box', 'group', 'turnover', 'diversification', + 'position_limit', 'return', or 'factor_exposure'} \item{enabled}{TRUE/FALSE} @@ -31,10 +32,6 @@ constraints in an object of type \code{\link{portfolio}}. } \details{ - In general, you will define your constraints as: - 'weight_sum', 'box', 'group', 'turnover', - 'diversification', or 'position_limit'. - Special cases for the weight_sum constraint are "full_investment" and "dollar_neutral" or "active" with appropriate values set for min_sum and max_sum. see Added: pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd 2013-08-13 23:58:01 UTC (rev 2777) @@ -0,0 +1,46 @@ +\name{factor_exposure_constraint} +\alias{factor_exposure_constraint} +\title{Constructor for factor exposure constraint} +\usage{ + factor_exposure_constraint(type = "factor_exposure", + assets, B, lower, upper, enabled = TRUE, + message = FALSE, ...)
+} +\arguments{ + \item{type}{character type of the constraint} + + \item{assets}{named vector of assets specifying seed + weights} + + \item{B}{vector or matrix of risk factor exposures} + + \item{lower}{vector of lower bounds of constraints for + risk factor exposures} + + \item{upper}{vector of upper bounds of constraints for + risk factor exposures} + + \item{enabled}{TRUE/FALSE} + + \item{message}{TRUE/FALSE. The default is message=FALSE. + Display messages if TRUE.} + + \item{\dots}{any other passthru parameters to specify + risk factor exposure constraints} +} +\description{ + This function is called by add.constraint when + type="factor_exposure" is specified. see + \code{\link{add.constraint}} \code{B} can be either a + vector or matrix of risk factor exposures (i.e. betas). + If \code{B} is a vector, the length of \code{B} must be + equal to the number of assets and lower and upper must be + scalars. If \code{B} is a matrix, the number of rows must + be equal to the number of assets and the number of + columns represent the number of factors. The length of + lower and upper must be equal to the number of factors. +} +\author{ + Ross Bennett +} + From noreply at r-forge.r-project.org Wed Aug 14 02:33:11 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 14 Aug 2013 02:33:11 +0200 (CEST) Subject: [Returnanalytics-commits] r2778 - pkg/PortfolioAnalytics/R Message-ID: <20130814003311.CEB36185A98@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-14 02:33:11 +0200 (Wed, 14 Aug 2013) New Revision: 2778 Modified: pkg/PortfolioAnalytics/R/optFUN.R Log: adding exposure constraint to gmv_opt and maxret_opt Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-13 23:58:01 UTC (rev 2777) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-14 00:33:11 UTC (rev 2778) @@ -38,6 +38,13 @@ rhs.vec <- c(rhs.vec, constraints$cLO, -constraints$cUP) } + # Add the factor exposures to Amat, dir.vec, and rhs.vec + if(!is.null(constraints$B)){ + Amat <- rbind(Amat, t(B), -t(B)) + dir.vec <- c(dir.vec, rep(">=", 2 * ncol(B))) + rhs.vec <- c(rhs.vec, constraints$lower, -constraints$upper) + } + # set up the quadratic objective ROI_objective <- Q_objective(Q=2*lambda*moments$var, L=-moments$mean) @@ -95,6 +102,13 @@ rhs.vec <- c(rhs.vec, constraints$cLO, -constraints$cUP) } + # Add the factor exposures to Amat, dir.vec, and rhs.vec + if(!is.null(constraints$B)){ + Amat <- rbind(Amat, t(B), -t(B)) + dir.vec <- c(dir.vec, rep(">=", 2 * ncol(B))) + rhs.vec <- c(rhs.vec, constraints$lower, -constraints$upper) + } + # set up the linear objective ROI_objective <- L_objective(L=-moments$mean) # objL <- -moments$mean From noreply at r-forge.r-project.org Wed Aug 14 02:52:53 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 14 Aug 2013 02:52:53 +0200 (CEST) Subject: [Returnanalytics-commits] r2779 - pkg/PortfolioAnalytics/R Message-ID: <20130814005253.3F59918513B@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-14 02:52:52 +0200 (Wed, 14 Aug 2013) New Revision: 2779 Modified: pkg/PortfolioAnalytics/R/optFUN.R Log: adding exposure constraint to maxret_milp_opt Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-14 00:33:11 UTC (rev 2778) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-14 00:52:52 UTC (rev 2779) @@ -192,6 
+192,15 @@ rhs <- c(rhs, constraints$cLO, -constraints$cUP) } + # Add the factor exposures to Amat, dir, and rhs + if(!is.null(constraints$B)){ + t.B <- t(B) + zeros <- matrix(data=0, nrow=nrow(t.B), ncol=ncol(t.B)) + Amat <- rbind(Amat, cbind(t.B, zeros), cbind(-t.B, zeros)) + dir <- c(dir, rep(">=", 2 * nrow(t.B))) + rhs <- c(rhs, constraints$lower, -constraints$upper) + } + objL <- c(-moments$mean, rep(0, N)) # Only seems to work if I do not specify bounds From noreply at r-forge.r-project.org Wed Aug 14 03:17:52 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 14 Aug 2013 03:17:52 +0200 (CEST) Subject: [Returnanalytics-commits] r2780 - pkg/PortfolioAnalytics/R Message-ID: <20130814011752.CCC74184FAF at r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-14 03:17:50 +0200 (Wed, 14 Aug 2013) New Revision: 2780 Modified: pkg/PortfolioAnalytics/R/optFUN.R Log: adding exposure constraints to etl_opt Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-14 00:52:52 UTC (rev 2779) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-14 01:17:50 UTC (rev 2780) @@ -264,6 +264,14 @@ dir.vec <- c(dir.vec, rep(">=", (n.groups + n.groups))) rhs.vec <- c(rhs.vec, constraints$cLO, -constraints$cUP) } + # Add the factor exposures to Amat, dir, and rhs + if(!is.null(constraints$B)){ + t.B <- t(B) + zeros <- matrix(data=0, nrow=nrow(t.B), ncol=(T+1)) + Amat <- rbind(Amat, cbind(t.B, zeros), cbind(-t.B, zeros)) + dir.vec <- c(dir.vec, rep(">=", 2 * nrow(t.B))) + rhs.vec <- c(rhs.vec, constraints$lower, -constraints$upper) + } ROI_objective <- L_objective(c(rep(0,N), rep(1/(alpha*T),T), 1)) opt.prob <- OP(objective=ROI_objective, From noreply at r-forge.r-project.org Wed Aug 14 04:06:52 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 14 Aug 2013 04:06:52 +0200 (CEST) Subject: [Returnanalytics-commits] r2781 - pkg/PortfolioAnalytics/R Message-ID: <20130814020652.17C9318102B at r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-14 04:06:48 +0200 (Wed, 14 Aug 2013) New Revision: 2781 Modified: pkg/PortfolioAnalytics/R/optFUN.R Log: adding group and exposure constraints to etl_milp_opt Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-14 01:17:50 UTC (rev 2780) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-14 02:06:48 UTC (rev 2781) @@ -344,13 +344,41 @@ # Add row for max_pos cardinality constraints tmpAmat <- rbind(tmpAmat, cbind(matrix(0, ncol=m + n + 2, nrow=1), matrix(1, ncol=m, nrow=1))) - + # Set up the rhs vector rhs <- c( rep(0, n), min_sum, max_sum, targetrhs, rep(0, 2*m), max_pos) # Set up the dir vector dir <- c( rep("<=", n), ">=", "<=", targetdir, rep("<=", 2*m), "==") + if(try(!is.null(constraints$groups), silent=TRUE)){ + n.groups <- length(constraints$groups) + Amat.group <- matrix(0, nrow=n.groups, ncol=m) + k <- 1 + l <- 0 + for(i in 1:n.groups){ + j <- constraints$groups[i] + Amat.group[i, k:(l+j)] <- 1 + k <- l + j + 1 + l <- k - 1 + } + if(is.null(constraints$cLO)) constraints$cLO <- rep(-Inf, n.groups) + if(is.null(constraints$cUP)) constraints$cUP <- rep(Inf, n.groups) + zeros <- matrix(0, nrow=n.groups, ncol=(m + n + 2)) + tmpAmat <- rbind(tmpAmat, cbind(Amat.group, zeros), cbind(-Amat.group, zeros)) + dir <- c(dir, rep(">=", (n.groups + n.groups))) + rhs <- c(rhs, constraints$cLO, -constraints$cUP) + } + + # Add the
factor exposures to Amat, dir, and rhs + if(!is.null(constraints$B)){ + t.B <- t(B) + zeros <- matrix(data=0, nrow=nrow(t.B), ncol=(m + n + 2)) + tmpAmat <- rbind(tmpAmat, cbind(t.B, zeros), cbind(-t.B, zeros)) + dir <- c(dir, rep(">=", 2 * nrow(t.B))) + rhs <- c(rhs, constraints$lower, -constraints$upper) + } + # Linear objective vector objL <- c( rep(0, m), 1, rep(1/n, n) / alpha, 0, rep(0, m)) From noreply at r-forge.r-project.org Wed Aug 14 18:15:37 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 14 Aug 2013 18:15:37 +0200 (CEST) Subject: [Returnanalytics-commits] r2782 - in pkg/PortfolioAnalytics: R sandbox Message-ID: <20130814161537.D28D918418D@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-14 18:15:37 +0200 (Wed, 14 Aug 2013) New Revision: 2782 Added: pkg/PortfolioAnalytics/sandbox/testing_factor_exposure.R Modified: pkg/PortfolioAnalytics/R/constraints.R pkg/PortfolioAnalytics/R/optFUN.R Log: Cleaning up how factor exposures are added to Amat in QP and LP solvers. Adding testing script for factor exposures with optimize_method='ROI'. Modified: pkg/PortfolioAnalytics/R/constraints.R =================================================================== --- pkg/PortfolioAnalytics/R/constraints.R 2013-08-14 02:06:48 UTC (rev 2781) +++ pkg/PortfolioAnalytics/R/constraints.R 2013-08-14 16:15:37 UTC (rev 2782) @@ -822,6 +822,7 @@ # The user passed in a vector of betas, lower and upper must be scalars if(length(lower) != 1) stop("lower must be a scalar") if(length(upper) != 1) stop("upper must be a scalar") + B <- matrix(B, ncol=1) } # The user has passed in a matrix for B if(is.matrix(B)){ Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-14 02:06:48 UTC (rev 2781) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-14 16:15:37 UTC (rev 2782) @@ -40,8 +40,9 @@ # Add the factor exposures to Amat, dir.vec, and rhs.vec if(!is.null(constraints$B)){ - Amat <- rbind(Amat, t(B), -t(B)) - dir.vec <- c(dir.vec, rep(">=", 2 * ncol(B))) + t.B <- t(constraints$B) + Amat <- rbind(Amat, t.B, -t.B) + dir.vec <- c(dir.vec, rep(">=", 2 * nrow(t.B))) rhs.vec <- c(rhs.vec, constraints$lower, -constraints$upper) } @@ -104,8 +105,9 @@ # Add the factor exposures to Amat, dir.vec, and rhs.vec if(!is.null(constraints$B)){ - Amat <- rbind(Amat, t(B), -t(B)) - dir.vec <- c(dir.vec, rep(">=", 2 * ncol(B))) + t.B <- t(constraints$B) + Amat <- rbind(Amat, t.B, -t.B) + dir.vec <- c(dir.vec, rep(">=", 2 * nrow(t.B))) rhs.vec <- c(rhs.vec, constraints$lower, -constraints$upper) } @@ -194,7 +196,7 @@ # Add the factor exposures to Amat, dir, and rhs if(!is.null(constraints$B)){ - t.B <- t(B) + t.B <- t(constraints$B) zeros <- matrix(data=0, nrow=nrow(t.B), ncol=ncol(t.B)) Amat <- rbind(Amat, cbind(t.B, zeros), cbind(-t.B, zeros)) dir <- c(dir, rep(">=", 2 * nrow(t.B))) @@ -266,7 +268,7 @@ } # Add the factor exposures to Amat, dir, and rhs if(!is.null(constraints$B)){ - t.B <- t(B) + t.B <- t(constraints$B) zeros <- matrix(data=0, nrow=nrow(t.B), ncol=(T+1)) Amat <- rbind(Amat, cbind(t.B, zeros), cbind(-t.B, zeros)) dir.vec <- c(dir.vec, rep(">=", 2 * nrow(t.B))) @@ -372,7 +374,7 @@ # Add the factor exposures to Amat, dir, and rhs if(!is.null(constraints$B)){ - t.B <- t(B) + t.B <- t(constraints$B) zeros <- matrix(data=0, nrow=nrow(t.B), ncol=(m + n + 2)) tmpAmat <- rbind(tmpAmat, cbind(t.B, zeros), cbind(-t.B, zeros)) dir <- c(dir, rep(">=", 2 * nrow(t.B))) Added: 
pkg/PortfolioAnalytics/sandbox/testing_factor_exposure.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_factor_exposure.R (rev 0) +++ pkg/PortfolioAnalytics/sandbox/testing_factor_exposure.R 2013-08-14 16:15:37 UTC (rev 2782) @@ -0,0 +1,89 @@ +library(PortfolioAnalytics) +library(ROI) +require(ROI.plugin.quadprog) +require(ROI.plugin.glpk) +library(Rglpk) + +data(edhec) +ret <- edhec[, 1:4] + +# Create portfolio object +pspec <- portfolio.spec(assets=colnames(ret)) +# Leverage constraint +lev_constr <- weight_sum_constraint(min_sum=1, max_sum=1) +# box constraint +lo_constr <- box_constraint(assets=pspec$assets, min=c(0.01, 0.02, 0.03, 0.04), max=0.65) +# group constraint +grp_constr <- group_constraint(assets=pspec$assets, groups=c(2, 1, 1), group_min=0.1, group_max=0.4) +# position limit constraint +pl_constr <- position_limit_constraint(assets=pspec$assets, max_pos=4) + +# Make up a B matrix for an industry factor model +# dummyA, dummyB, and dummyC could be industries, sectors, etc. +B <- cbind(c(1, 1, 0, 0), + c(0, 0, 1, 0), + c(0, 0, 0, 1)) +rownames(B) <- colnames(ret) +colnames(B) <- c("dummyA", "dummyB", "dummyC") +print(B) +lower <- c(0.1, 0.1, 0.1) +upper <- c(0.4, 0.4, 0.4) + +# Industry exposure constraint +# The exposure constraint and group constraint are equivalent to test that they +# result in the same solution +exp_constr <- factor_exposure_constraint(assets=pspec$assets, B=B, lower=lower, upper=upper) + +# objective to minimize variance +var_obj <- portfolio_risk_objective(name="var") +# objective to maximize return +ret_obj <- return_objective(name="mean") +# objective to minimize ETL +etl_obj <- portfolio_risk_objective(name="ETL") + +# group constraint and exposure constraint should result in same solution + +##### minimize var objective ##### +opta <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(lev_constr, lo_constr, grp_constr), + objectives=list(var_obj), + optimize_method="ROI") +opta + +optb <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(lev_constr, lo_constr, exp_constr), + objectives=list(var_obj), + optimize_method="ROI") +optb + +all.equal(opta$weights, optb$weights) + +##### maximize return objective ##### +optc <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(lev_constr, lo_constr, grp_constr), + objectives=list(ret_obj), + optimize_method="ROI") +optc + +optd <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(lev_constr, lo_constr, exp_constr), + objectives=list(ret_obj), + optimize_method="ROI") +optd + +all.equal(optc$weights, optd$weights) + +##### minimize ETL objective ##### +opte <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(lev_constr, lo_constr, grp_constr), + objectives=list(etl_obj), + optimize_method="ROI") +opte + +optf <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(lev_constr, lo_constr, exp_constr), + objectives=list(etl_obj), + optimize_method="ROI") +optf + +all.equal(opte$weights, optf$weights) From noreply at r-forge.r-project.org Wed Aug 14 18:24:03 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 14 Aug 2013 18:24:03 +0200 (CEST) Subject: [Returnanalytics-commits] r2783 - pkg/PortfolioAnalytics/R Message-ID: <20130814162403.9754418446A@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-14 18:24:03 +0200 (Wed, 14 Aug 2013) New Revision: 2783 Modified: pkg/PortfolioAnalytics/R/constrained_objective.R Log: adding code in 
constrained_objective_v2 to penalize weights that do not satisfy the factor exposure constraint Modified: pkg/PortfolioAnalytics/R/constrained_objective.R =================================================================== --- pkg/PortfolioAnalytics/R/constrained_objective.R 2013-08-14 16:15:37 UTC (rev 2782) +++ pkg/PortfolioAnalytics/R/constrained_objective.R 2013-08-14 16:24:03 UTC (rev 2783) @@ -526,6 +526,23 @@ mult <- 1 out = out + penalty * mult * abs(mean_return - return_target) } # End return constraint penalty + + # penalize weights that violate factor exposure constraints + if(!is.null(constraints$B)){ + t.B <- t(constraints$B) + lower <- constraints$lower + upper <- constraints$upper + mult <- 1 + for(i in 1:nrow(t.B)){ + tmpexp <- as.numeric(t(w) %*% t.B[i, ]) + if(tmpexp < lower[i]){ + out <- out + penalty * mult * (lower[i] - tmpexp) + } + if(tmpexp > upper[i]){ + out <- out + penalty * mult * (tmpexp - upper[i]) + } + } + } # End factor exposure constraint penalty nargs <- list(...) if(length(nargs)==0) nargs <- NULL From noreply at r-forge.r-project.org Wed Aug 14 19:22:28 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 14 Aug 2013 19:22:28 +0200 (CEST) Subject: [Returnanalytics-commits] r2784 - pkg/PortfolioAnalytics/sandbox Message-ID: <20130814172228.ECDE01853B3@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-14 19:22:28 +0200 (Wed, 14 Aug 2013) New Revision: 2784 Modified: pkg/PortfolioAnalytics/sandbox/testing_factor_exposure.R Log: adding DEoptim and random optimization methods to factor exposure testing Modified: pkg/PortfolioAnalytics/sandbox/testing_factor_exposure.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_factor_exposure.R 2013-08-14 16:24:03 UTC (rev 2783) +++ pkg/PortfolioAnalytics/sandbox/testing_factor_exposure.R 2013-08-14 17:22:28 UTC (rev 2784) @@ -3,6 +3,7 @@ require(ROI.plugin.quadprog) require(ROI.plugin.glpk) library(Rglpk) +library(DEoptim) data(edhec) ret <- edhec[, 1:4] @@ -87,3 +88,43 @@ optf all.equal(opte$weights, optf$weights) + +##### maximize return objective with DEoptim ##### +set.seed(123) +optde1 <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(lev_constr, lo_constr, grp_constr), + objectives=list(ret_obj), + optimize_method="DEoptim", + search_size=2000, + trace=TRUE) +optde1 + +set.seed(123) +optde2 <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(lev_constr, lo_constr, exp_constr), + objectives=list(ret_obj), + optimize_method="DEoptim", + search_size=2000, + trace=TRUE) +optde2 + +all.equal(optde1$weights, optde2$weights) + +##### maximize return objective with random ##### +optrp1 <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(lev_constr, lo_constr, grp_constr), + objectives=list(ret_obj), + optimize_method="random", + search_size=2000, + trace=TRUE) +optrp1 + +optrp2 <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(lev_constr, lo_constr, exp_constr), + objectives=list(ret_obj), + optimize_method="random", + search_size=2000, + trace=TRUE) +optrp2 + +all.equal(optrp1$weights, optrp2$weights) \ No newline at end of file From noreply at r-forge.r-project.org Wed Aug 14 23:33:07 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 14 Aug 2013 23:33:07 +0200 (CEST) Subject: [Returnanalytics-commits] r2785 - pkg/PortfolioAnalytics/R Message-ID: <20130814213307.7D997185212@r-forge.r-project.org> Author: rossbennett34 
Date: 2013-08-14 23:33:07 +0200 (Wed, 14 Aug 2013) New Revision: 2785 Modified: pkg/PortfolioAnalytics/R/generics.R Log: adding factor exposures to summary method for optimize.portfolio Modified: pkg/PortfolioAnalytics/R/generics.R =================================================================== --- pkg/PortfolioAnalytics/R/generics.R 2013-08-14 17:22:28 UTC (rev 2784) +++ pkg/PortfolioAnalytics/R/generics.R 2013-08-14 21:33:07 UTC (rev 2785) @@ -489,10 +489,37 @@ cat("Turnover Target Constraint:\n") print(constraints$turnover_target) cat("\n") - cat("Realized turnover:\n") + cat("Realized turnover from seed weights:\n") print(turnover(object$weights, wts.init=object$portfolio$assets)) cat("\n") + # Factor exposure constraint + cat("Factor Exposure Constraints:\n") + if(!is.null(constraints$B) & !is.null(constraints$lower) & !is.null(constraints$upper)){ + t.B <- t(constraints$B) + cat("Factor Exposure B Matrix:\n") + print(constraints$B) + cat("\n") + cat("Lower bound on factor exposures, lower:\n") + lower <- constraints$lower + names(lower) <- colnames(constraints$B) + print(lower) + cat("\n") + cat("Upper bound on factor exposures, upper:\n") + upper <- constraints$upper + names(upper) <- colnames(constraints$B) + print(upper) + cat("\n") + cat("Realized Factor Exposures:\n") + tmpexp <- vector(mode="numeric", length=nrow(t.B)) + for(i in 1:nrow(t.B)){ + tmpexp[i] <- t(object$weights) %*% t.B[i, ] + } + names(tmpexp) <- rownames(t.B) + print(tmpexp) + cat("\n\n") + } + # Objectives cat(rep("*", 40), "\n", sep="") cat("Objectives\n") From noreply at r-forge.r-project.org Wed Aug 14 23:50:17 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 14 Aug 2013 23:50:17 +0200 (CEST) Subject: [Returnanalytics-commits] r2786 - in pkg/PortfolioAnalytics: R man Message-ID: <20130814215017.39D03184B61 at r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-14 23:50:16 +0200 (Wed, 14 Aug 2013) New Revision: 2786 Modified: pkg/PortfolioAnalytics/R/constraints.R pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd Log: adding default column names and row names to the B vector and B matrix if not specified by the user Modified: pkg/PortfolioAnalytics/R/constraints.R =================================================================== --- pkg/PortfolioAnalytics/R/constraints.R 2013-08-14 21:33:07 UTC (rev 2785) +++ pkg/PortfolioAnalytics/R/constraints.R 2013-08-14 21:50:16 UTC (rev 2786) @@ -794,12 +794,18 @@ #' Constructor for factor exposure constraint #' #' This function is called by add.constraint when type="factor_exposure" is specified. see \code{\link{add.constraint}} +#' #' \code{B} can be either a vector or matrix of risk factor exposures (i.e. betas). #' If \code{B} is a vector, the length of \code{B} must be equal to the number of -#' assets and lower and upper must be scalars. +#' assets and lower and upper must be scalars. If \code{B} is passed in as a vector, +#' it will be converted to a matrix with one column. +#' #' If \code{B} is a matrix, the number of rows must be equal to the number #' of assets and the number of columns represent the number of factors. The length -#' of lower and upper must be equal to the number of factors. +#' of lower and upper must be equal to the number of factors. The \code{B} matrix should +#' have column names specifying the factors and row names specifying the assets. +#' Default column names and row names will be assigned if the user passes in a +#' \code{B} matrix without column names or row names.
#' #' @param type character type of the constraint #' @param assets named vector of assets specifying seed weights @@ -822,7 +828,8 @@ # The user passed in a vector of betas, lower and upper must be scalars if(length(lower) != 1) stop("lower must be a scalar") if(length(upper) != 1) stop("upper must be a scalar") - B <- matrix(B, ncol=1) + bnames <- names(B) + B <- matrix(B, ncol=1, dimnames=list(bnames)) } # The user has passed in a matrix for B if(is.matrix(B)){ @@ -831,6 +838,14 @@ # The user passed in a matrix for B --> lower and upper must be equal to the number of columns in the beta matrix if(length(lower) != ncol(B)) stop("length of lower must be equal to the number of columns in the B matrix") if(length(upper) != ncol(B)) stop("length of upper must be equal to the number of columns in the B matrix") + if(is.null(colnames(B))){ + # The user has passed in a B matrix without column names specifying factors + colnames(B) <- paste("factor", 1:ncol(B), sep="") + } + if(is.null(rownames(B))){ + # The user has passed in a B matrix without row names specifying assets + rownames(B) <- names(assets) + } } Constraint <- constraint_v2(type=type, enabled=enabled, constrclass="factor_exposure_constraint", ...) Modified: pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd 2013-08-14 21:33:07 UTC (rev 2785) +++ pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd 2013-08-14 21:50:16 UTC (rev 2786) @@ -31,15 +31,26 @@ \description{ This function is called by add.constraint when type="factor_exposure" is specified. see - \code{\link{add.constraint}} \code{B} can be either a - vector or matrix of risk factor exposures (i.e. betas). - If \code{B} is a vector, the length of \code{B} must be - equal to the number of assets and lower and upper must be - scalars. If \code{B} is a matrix, the number of rows must - be equal to the number of assets and the number of - columns represent the number of factors. The length of - lower and upper must be equal to the number of factors. + \code{\link{add.constraint}} } +\details{ + \code{B} can be either a vector or matrix of risk factor + exposures (i.e. betas). If \code{B} is a vector, the + length of \code{B} must be equal to the number of assets + and lower and upper must be scalars. If \code{B} is + passed in as a vector, it will be converted to a matrix + with one column. + + If \code{B} is a matrix, the number of rows must be equal + to the number of assets and the number of columns + represent the number of factors. The length of lower and + upper must be equal to the number of factors. The + \code{B} matrix should have column names specifying the + factors and row names specifying the assets. Default + column names and row names will be assigned if the user + passes in a \code{B} matrix without column names or row + names. 
+} \author{ Ross Bennett } From noreply at r-forge.r-project.org Thu Aug 15 02:03:25 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 15 Aug 2013 02:03:25 +0200 (CEST) Subject: [Returnanalytics-commits] r2787 - pkg/PerformanceAnalytics/sandbox/pulkit Message-ID: <20130815000325.DDAD4185AC9@r-forge.r-project.org> Author: peter_carl Date: 2013-08-15 02:03:25 +0200 (Thu, 15 Aug 2013) New Revision: 2787 Added: pkg/PerformanceAnalytics/sandbox/pulkit/R/ pkg/PerformanceAnalytics/sandbox/pulkit/inst/ pkg/PerformanceAnalytics/sandbox/pulkit/man/ pkg/PerformanceAnalytics/sandbox/pulkit/src/ pkg/PerformanceAnalytics/sandbox/pulkit/tests/ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ Log: - package structure for testing inclusion and roxygen - please organize your functions into these directories and build the roxygen documentation From noreply at r-forge.r-project.org Thu Aug 15 02:11:37 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 15 Aug 2013 02:11:37 +0200 (CEST) Subject: [Returnanalytics-commits] r2788 - pkg/PerformanceAnalytics/sandbox/Shubhankit Message-ID: <20130815001137.78535185AC9@r-forge.r-project.org> Author: peter_carl Date: 2013-08-15 02:11:37 +0200 (Thu, 15 Aug 2013) New Revision: 2788 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ pkg/PerformanceAnalytics/sandbox/Shubhankit/inst/ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ pkg/PerformanceAnalytics/sandbox/Shubhankit/src/ pkg/PerformanceAnalytics/sandbox/Shubhankit/tests/ pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ Log: - package directory structure - please sort your code into these directories and build roxygen documentation From noreply at r-forge.r-project.org Thu Aug 15 07:40:53 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 15 Aug 2013 07:40:53 +0200 (CEST) Subject: [Returnanalytics-commits] r2789 - pkg/PortfolioAnalytics/R Message-ID: <20130815054053.E9BC7184E8D@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-15 07:40:53 +0200 (Thu, 15 Aug 2013) New Revision: 2789 Modified: pkg/PortfolioAnalytics/R/optFUN.R Log: calculate the column means of R in gmv_opt for case when var is the only objective specified and moments$mean is not calculated Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-15 00:11:37 UTC (rev 2788) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-15 05:40:53 UTC (rev 2789) @@ -14,7 +14,9 @@ # check for a target return if(!is.na(target)) { - Amat <- rbind(Amat, moments$mean) + # If var is the only objective specified, then moments$mean won't be calculated + if(all(moments$mean==0)) col_means <- colMeans(R) + Amat <- rbind(Amat, col_means) dir.vec <- c(dir.vec, "==") rhs.vec <- c(rhs.vec, target) } From noreply at r-forge.r-project.org Thu Aug 15 12:30:02 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 15 Aug 2013 12:30:02 +0200 (CEST) Subject: [Returnanalytics-commits] r2790 - pkg/PerformanceAnalytics/sandbox/pulkit/week7 Message-ID: <20130815103002.AABD118446A@r-forge.r-project.org> Author: pulkit Date: 2013-08-15 12:30:02 +0200 (Thu, 15 Aug 2013) New Revision: 2790 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R Log: Extreme Drawdown in Drawdowns Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R 
=================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R 2013-08-15 05:40:53 UTC (rev 2789) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R 2013-08-15 10:30:02 UTC (rev 2790) @@ -27,13 +27,29 @@ #' #' #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset return +#' @param type The type of distribution "gpd","pd","weibull","exponential" +#' @param threshold The threshold beyond which the drawdowns have to be modelled #' #'@references -#'Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003). Coppead Working Paper Series No. 359. Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322. +#'Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003). Coppead Working Paper Series No. 359. +#'Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322. #' #' +DrawdownGPD<-function(R,type=c("gpd","pd","weibull","exponential"),threshold=0.90){ + x = checkData(R) + columns = ncol(R) + columnnames = colnames(R) + type = type[1] + dr = -Drawdowns(R) + dr_sorted = sort(as.vector(dr)) + data = dr_sorted[0.9*nrow(R):nrow(r)] +} + + + + From noreply at r-forge.r-project.org Thu Aug 15 16:40:05 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 15 Aug 2013 16:40:05 +0200 (CEST) Subject: [Returnanalytics-commits] r2791 - pkg/PerformanceAnalytics/sandbox/pulkit/week7 Message-ID: <20130815144005.2C1A3180602@r-forge.r-project.org> Author: pulkit Date: 2013-08-15 16:40:04 +0200 (Thu, 15 Aug 2013) New Revision: 2791 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R Log: editing GPD for drawdowns Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R 2013-08-15 10:30:02 UTC (rev 2790) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R 2013-08-15 14:40:04 UTC (rev 2791) @@ -8,7 +8,7 @@ #' #' Modified Generalized Pareto Distribution is given by the following formula #' -#' \deqn{G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m^\gamma}{\psi})^(-1/\eta), if \eta \neq 0 \\ 1- e^{-frac{m^\gamma}{\psi}}, if \eta = 0,\end{array}} +#' \deqn{G_{\eta}(m) = \begin{array}{ll} 1-\left(1+\eta\frac{m^\gamma}{\psi}\right)^{-1/\eta}, & \eta \neq 0 \\ 1-e^{-\frac{m^\gamma}{\psi}}, & \eta = 0\end{array}} #' #' Here \eqn{\gamma{\epsilon}R} is the modifying parameter. When \eqn{\gamma<1} the corresponding densities are #' strictly decreasing with heavier tail; the GPD is recovered by setting \eqn{\gamma = 1}. \eqn{\gamma \textgreater 1} @@ -27,7 +27,7 @@ #' #' #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset return -#' @param type The type of distribution "gpd","pd","weibull","exponential" +#' @param type The type of distribution "gpd","pd","weibull" #' @param threshold The threshold beyond which the drawdowns have to be modelled #' #'@references #'Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003). Coppead Working Paper Series No. 359. #'Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322.
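+#'
+#' For orientation, a self-contained sketch of the closed-form Pareto fit that
+#' the "pd" branch below computes (simulated stand-in data, not package output;
+#' the scale is taken as the minimum of the tail sample):
+#' \preformatted{
+#' dd <- sort(abs(rnorm(1000, 0.05, 0.02)))             # stand-in drawdowns
+#' tail_dd <- dd[ceiling(0.9 * length(dd)):length(dd)]  # largest 10 percent
+#' scale <- min(tail_dd)
+#' shape <- length(tail_dd) / sum(log(tail_dd / scale)) # MLE for the shape
+#' }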
#' #' -DrawdownGPD<-function(R,type=c("gpd","pd","weibull","exponential"),threshold=0.90){ +DrawdownGPD<-function(R,type=c("gpd","pd","weibull"),threshold=0.90){ x = checkData(R) columns = ncol(R) columnnames = colnames(R) type = type[1] dr = -Drawdowns(R) dr_sorted = sort(as.vector(dr)) - data = dr_sorted[0.9*nrow(R):nrow(r)] + data = dr_sorted[(0.9*nrow(R)):nrow(R)] + if(type=="gpd"){ + gpd = fitgpd(data) + return(gpd) + } + if(type=="weibull"){ + weibull = fitdistr(data,"weibull") + return(weibull) + } + if(type=="pd"){ + scale = min(data) + shape = length(data)/(sum(log(data))-length(data)*log(scale)) + return(list(scale = scale, shape = shape)) + } + + } From noreply at r-forge.r-project.org Thu Aug 15 18:32:15 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 15 Aug 2013 18:32:15 +0200 (CEST) Subject: [Returnanalytics-commits] r2792 - in pkg/PortfolioAnalytics: R demo Message-ID: <20130815163215.50A34180602@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-15 18:32:14 +0200 (Thu, 15 Aug 2013) New Revision: 2792 Added: pkg/PortfolioAnalytics/demo/demo_ROI.R Modified: pkg/PortfolioAnalytics/R/optFUN.R Log: Adding demo script for ROI. Fixing error in gmv_opt to add mean vector to Amat. Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-15 14:40:04 UTC (rev 2791) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-15 16:32:14 UTC (rev 2792) @@ -15,8 +15,12 @@ # check for a target return if(!is.na(target)) { # If var is the only objective specified, then moments$mean won't be calculated - if(all(moments$mean==0)) col_means <- colMeans(R) - Amat <- rbind(Amat, col_means) + if(all(moments$mean==0)){ + tmp_means <- colMeans(R) + } else { + tmp_means <- moments$mean + } + Amat <- rbind(Amat, tmp_means) dir.vec <- c(dir.vec, "==") rhs.vec <- c(rhs.vec, target) } Added: pkg/PortfolioAnalytics/demo/demo_ROI.R =================================================================== --- pkg/PortfolioAnalytics/demo/demo_ROI.R (rev 0) +++ pkg/PortfolioAnalytics/demo/demo_ROI.R 2013-08-15 16:32:14 UTC (rev 2792) @@ -0,0 +1,220 @@ +# ROI examples + +# The following objectives can be solved with optimize_method=ROI +# maximize return +# minimum variance +# maximize quadratic utility +# minimize ETL + +library(PortfolioAnalytics) +library(ROI) +library(Rglpk) +require(ROI.plugin.glpk) +require(ROI.plugin.quadprog) + +# Load the returns data +data(edhec) +ret <- edhec[, 1:4] +funds <- colnames(ret) + +# Create portfolio specification +pspec <- portfolio.spec(assets=funds) + +##### Constraints ##### +# Constraints will be specified as separate objects, but could also be added to +# the portfolio object (see the portfolio vignette for examples of specifying +# constraints) + +# Full investment constraint +fi_constr <- weight_sum_constraint(min_sum=1, max_sum=1) + +# Long only constraint +lo_constr <- box_constraint(assets=pspec$assets, min=0, max=1) + +# Box constraints +box_constr <- box_constraint(assets=pspec$assets, + min=c(0.05, 0.04, 0.1, 0.03), + max=c(0.65, 0.45, 0.7, 0.6)) + +# Position limit constraint +pl_constr <- position_limit_constraint(assets=pspec$assets, max_pos=2) + +# Target mean return constraint +ret_constr <- return_constraint(return_target=0.007) + +# Group constraint +group_constr <- group_constraint(assets=pspec$assets, groups=c(1, 2, 1), + group_min=0, group_max=0.5) + +# Factor exposure constraint +# Industry exposures are used in this example, but other factors could be used as well
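+# (in the B matrix below, each column is one industry factor, each row one
+# asset, and a 1 marks membership of that asset in that industry)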
+# Note that exposures to industry factors are similar to group constraints +facexp_constr <- factor_exposure_constraint(assets=pspec$assets, + B=cbind(c(1, 0, 0, 0), + c(0, 1, 1, 0), + c(0, 0, 0, 1)), + lower=c(0.1, 0.15, 0.05), + upper=c(0.45, 0.65, 0.60)) + +##### Objectives ##### +# Return objective +ret_obj <- return_objective(name="mean") + +# Variance objective +var_obj <- portfolio_risk_objective(name="var") + +# ETL objective +etl_obj <- portfolio_risk_objective(name="ETL") + +##### Maximize Return Optimization ##### +# The ROI solver uses the glpk plugin to interface to the Rglpk package for +# objectives to maximimize mean return + +# Full investment and long only constraints +opt_maxret <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, lo_constr), + objectives=list(ret_obj), + optimize_method="ROI") +opt_maxret + +# Full investment, box, and target return constraints +opt_maxret <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, box_constr, ret_constr), + objectives=list(ret_obj), + optimize_method="ROI") +opt_maxret + +# Full investment, box, and position_limit constraints +opt_maxret <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, box_constr, pl_constr), + objectives=list(ret_obj), + optimize_method="ROI") +opt_maxret + +# Full investment, box, and group constraints +opt_maxret <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, box_constr, group_constr), + objectives=list(ret_obj), + optimize_method="ROI") +opt_maxret + +# Full investment, box, and factor exposure constraints +opt_maxret <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, box_constr, facexp_constr), + objectives=list(ret_obj), + optimize_method="ROI") +opt_maxret + +##### Minimize Variance Optimization ##### +# The ROI solver uses the quadprog plugin to interface to the quadprog package for +# objectives to minimize variance + +# Global minimum variance portfolio. 
Only specify the full investment constraint +opt_minvar <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr), + objectives=list(var_obj), + optimize_method="ROI") +opt_minvar + +# Full investment, box, and target mean_return constraints +opt_minvar <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, box_constr, ret_constr), + objectives=list(var_obj), + optimize_method="ROI") +opt_minvar + +# Full investment, box, and group constraints +opt_minvar <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, box_constr, group_constr), + objectives=list(var_obj), + optimize_method="ROI") +opt_minvar + +# Full investment, box, and exposure constraints +opt_minvar <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, box_constr, facexp_constr), + objectives=list(var_obj), + optimize_method="ROI") +opt_minvar + +##### Maximize Quadratic Utility Optimization ##### +# The ROI solver uses the quadprog plugin to interface to the guadprog package for +# objectives to maximimize quadratic utility + +# Create the variance objective with a large risk_aversion paramater +# A large risk_aversion parameter will approximate the global minimum variance portfolio +var_obj <- portfolio_risk_objective(name="var", risk_aversion=1e4) + +# Full investment and box constraints +opt_qu <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, box_constr), + objectives=list(ret_obj, var_obj), + optimize_method="ROI") +opt_qu + +# Change the risk_aversion parameter in the variance objective to a small number +# A small risk_aversion parameter will approximate the maximum portfolio +var_obj$risk_aversion <- 1e-4 + +# Full investment and long only constraints +opt_qu <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, lo_constr), + objectives=list(ret_obj, var_obj), + optimize_method="ROI") +opt_qu + +# Change the risk_aversion parameter to a more reasonable value +var_obj$risk_aversion <- 0.25 +# Full investment, long only, and factor exposure constraints +opt_qu <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, lo_constr, facexp_constr), + objectives=list(ret_obj, var_obj), + optimize_method="ROI") +opt_qu + +# Full investment, long only, target return, and group constraints +opt_qu <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, lo_constr, ret_constr, group_constr), + objectives=list(ret_obj, var_obj), + optimize_method="ROI") +opt_qu + +##### Minimize ETL Optimization ##### +# The ROI solver uses the glpk plugin to interface to the Rglpk package for +# objectives to minimimize expected tail loss + +# Full investment and long only constraints +opt_minetl <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, lo_constr), + objectives=list(etl_obj), + optimize_method="ROI") +opt_minetl + +# Full investment, box, and target return constraints +opt_minetl <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, lo_constr, ret_constr), + objectives=list(etl_obj), + optimize_method="ROI") +opt_minetl + +# Full investment, long only, and position limit constraints +opt_minetl <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, lo_constr, pl_constr), + objectives=list(etl_obj), + optimize_method="ROI") +opt_minetl + +# Full investment, long only, and group constraints +opt_minetl <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, lo_constr, group_constr), + 
objectives=list(etl_obj), + optimize_method="ROI") +opt_minetl + +# Full investment, long only, and factor exposure constraints +opt_minetl <- optimize.portfolio(R=ret, portfolio=pspec, + constraints=list(fi_constr, lo_constr, facexp_constr), + objectives=list(etl_obj), + optimize_method="ROI") +opt_minetl + From noreply at r-forge.r-project.org Fri Aug 16 06:40:31 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 16 Aug 2013 06:40:31 +0200 (CEST) Subject: [Returnanalytics-commits] r2793 - in pkg/PortfolioAnalytics: R sandbox Message-ID: <20130816044031.BC2D9185AA3@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-16 06:40:31 +0200 (Fri, 16 Aug 2013) New Revision: 2793 Added: pkg/PortfolioAnalytics/sandbox/ROI_vignette.Rnw pkg/PortfolioAnalytics/sandbox/ROI_vignette.pdf Modified: pkg/PortfolioAnalytics/R/applyFUN.R pkg/PortfolioAnalytics/R/charts.ROI.R Log: Adding ROI vignette Modified: pkg/PortfolioAnalytics/R/applyFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/applyFUN.R 2013-08-15 16:32:14 UTC (rev 2792) +++ pkg/PortfolioAnalytics/R/applyFUN.R 2013-08-16 04:40:31 UTC (rev 2793) @@ -23,6 +23,7 @@ nargs <- c(nargs, moments(R)) nargs$R <- R + #nargs$invert=FALSE # match the FUN arg to a risk or return function switch(FUN, Modified: pkg/PortfolioAnalytics/R/charts.ROI.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-15 16:32:14 UTC (rev 2792) +++ pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-16 04:40:31 UTC (rev 2793) @@ -152,7 +152,7 @@ par(mar=c(4,4,4,2)) chart.Scatter.ROI(ROI, R, rp=rp, portfolio=portfolio, return.col=return.col, risk.col=risk.col, ..., element.color=element.color, cex.axis=cex.axis, main=main) par(mar=c(2,4,0,2)) - chart.Weights.ROI(ROI, neighbors=neighbors, ..., main="", las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=ce.axis) + chart.Weights.ROI(ROI, neighbors=neighbors, ..., main="", las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis) par(op) } Added: pkg/PortfolioAnalytics/sandbox/ROI_vignette.Rnw =================================================================== --- pkg/PortfolioAnalytics/sandbox/ROI_vignette.Rnw (rev 0) +++ pkg/PortfolioAnalytics/sandbox/ROI_vignette.Rnw 2013-08-16 04:40:31 UTC (rev 2793) @@ -0,0 +1,141 @@ +\documentclass[12pt,letterpaper,english]{article} +\usepackage[OT1]{fontenc} +\usepackage{Sweave} +\usepackage{verbatim} +\usepackage{Rd} +\usepackage{amsmath} + +\begin{document} + +\title{Using the ROI solvers with PortfolioAnalytics} +\author{Ross Bennett} + +\maketitle + +\begin{abstract} +The purpose of this vignette is to demonstrate a sample of the optimzation problems that can be solved in PortfolioAnalytics with the ROI solvers. See \code{demo(demo\_ROI)} for a more complete set of examples. +\end{abstract} + +\tableofcontents + +\section{Getting Started} +\subsection{Load Packages} +Load the necessary packages. +<<>>= +suppressMessages(library(PortfolioAnalytics)) +suppressMessages(library(Rglpk)) +suppressMessages(library(foreach)) +suppressMessages(library(iterators)) +suppressMessages(library(ROI)) +suppressMessages(require(ROI.plugin.glpk)) +suppressMessages(require(ROI.plugin.quadprog)) +@ + +\subsection{Data} +The edhec data set from the PerformanceAnalytics package will be used as example data. 
+<<>>= +data(edhec) + +# Use the first 4 columns in edhec for a returns object +returns <- edhec[, 1:4] +print(head(returns, 5)) + +# Get a character vector of the fund names +funds <- colnames(returns) +@ + + +\section{Maximizing Mean Return} +The objective to maximize mean return is a linear problem of the form: +\begin{equation*} + \begin{aligned} + & \underset{\boldsymbol{w}}{\text{maximize}} + & & \hat{\boldsymbol{\mu}}' \boldsymbol{w} \\ + \end{aligned} +\end{equation*} + +Where $\hat{\boldsymbol{\mu}}$ is the estimated mean asset returns and $\boldsymbol{w}$ is the set of weights. Because this is a linear problem, it is well suited to be solved using a linear programming solver. For these types of problems, PortfolioAnalytics uses the ROI package with the glpk plugin. + +\subsection{Portfolio Object} + +The first step is to create the portfolio object. Then add constraints and a return objective. +<<>>= +# Create portfolio object +portf_maxret <- portfolio.spec(assets=funds) + +# Add constraints to the portfolio object +portf_maxret <- add.constraint(portfolio=portf_maxret, type="full_investment") +portf_maxret <- add.constraint(portfolio=portf_maxret, type="box", + min=c(0.02, 0.05, 0.03, 0.02), + max=c(0.55, 0.6, 0.65, 0.5)) + +# Add objective to the portfolio object +portf_maxret <- add.objective(portfolio=portf_maxret, type="return", name="mean") +@ + +The print method for the portfolio object shows a high level overview while the summary method shows much more detail of the assets, constraints, and objectives that are specified in the portfolio object. +<<>>= +print(portf_maxret) +summary(portf_maxret) +@ + +\subsection{Optimization} +The next step is to run the optimization. Note that \code{optimize\_method="ROI"} is specified in the call to \code{optimize.portfolio} to select the solver used for the optimization. +<<>>= +# Run the optimization +opt_maxret <- optimize.portfolio(R=returns, portfolio=portf_maxret, optimize_method="ROI") +@ + +The print method for the \code{opt\_maxret} object shows the call, optimal weights, and the objective measure +<<>>= +print(opt_maxret) +@ + +The sumary method for the \code{opt\_maxret} object shows details of the object with constraints, objectives, and other portfolio statistics. +<<>>= +summary(opt_maxret) +@ + + +The \code{opt\_maxret} object is of class \code{optimize.portfolio.ROI} and contains the following elements. Objects of class \code{optimize.portfolio.ROI} are S3 objects and elements can be accessed with the \code{\$} operator. +<<>>= +names(opt_maxret) +@ + +The optimal weights and value of the objective function at the optimum can be accessed with the \code{extractStats} function. +<<>>= +extractStats(opt_maxret) +@ + +The optimal weights can be accessed with the \code{extractWeights} function. +<<>>= +extractWeights(opt_maxret) +@ + +\subsection{Visualization} +The chart of the optimal weights as well as the box constraints can be created with \code{chart.Weights.ROI}. The blue dots are the optimal weights and the gray triangles are the \code{min} and \code{max} of the box constraints. +<>= +chart.Weights.ROI(opt_maxret) +@ + +The optimal portfolio can be plotted in risk-return space along with other feasible portfolios. The return metric is defined in the \code{return.col} argument and the risk metric is defined in the \code{risk.col} argument. The scatter chart includes the optimal portfolio (blue dot) and other feasible portfolios (gray circles) to show the overall feasible space given the constraints. 
By default, if \code{rp} is not passed in, the feasible portfolios are generated with \code{random\_portfolios} to satisfy the constraints of the portfolio object. + +Volatility as the risk metric +<>= +chart.Scatter.ROI(opt_maxret, R=returns,return.col="mean", risk.col="sd", main="Maximum Return") +@ + +Expected tail loss as the risk metric +<>= +chart.Scatter.ROI(opt_maxret, R=returns, return.col="mean", risk.col="ETL", main="Maximum Return", invert=FALSE, p=0.9) +@ + +\subsection{Backtesting} +An out of sample backtest is run with \code{optimize.portfolio.rebalancing}. In this example, an initial training period of 36 months is used and the portfolio is rebalanced quarterly. +<<>>= +bt_maxret <- optimize.portfolio.rebalancing(R=returns, portfolio=portf_maxret, optimize_method="ROI", rebalance_on="quarters", training_period=36, trace=TRUE) +@ + +The \code{bt\_maxret} object is a list containing the optimal weights and objective measure at each rebalance period. + +\end{document} \ No newline at end of file Added: pkg/PortfolioAnalytics/sandbox/ROI_vignette.pdf =================================================================== (Binary files differ) Property changes on: pkg/PortfolioAnalytics/sandbox/ROI_vignette.pdf ___________________________________________________________________ Added: svn:mime-type + application/octet-stream From noreply at r-forge.r-project.org Fri Aug 16 10:53:23 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 16 Aug 2013 10:53:23 +0200 (CEST) Subject: [Returnanalytics-commits] r2794 - in pkg/PerformanceAnalytics/sandbox/pulkit: . R data src vignettes week1/code week1/vignette week2/code week2/vignette week3_4/code week3_4/vignette week5 week6 week7 Message-ID: <20130816085323.DF8511853C2@r-forge.r-project.org> Author: pulkit Date: 2013-08-16 10:53:23 +0200 (Fri, 16 Aug 2013) New Revision: 2794 Added: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R pkg/PerformanceAnalytics/sandbox/pulkit/R/CdaR.R pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MonteSimulTriplePenance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R pkg/PerformanceAnalytics/sandbox/pulkit/R/TriplePenance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.SharpeEfficient.R pkg/PerformanceAnalytics/sandbox/pulkit/R/edd.R pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/data/ 
pkg/PerformanceAnalytics/sandbox/pulkit/data/data.csv pkg/PerformanceAnalytics/sandbox/pulkit/data/data1.csv pkg/PerformanceAnalytics/sandbox/pulkit/data/data3.csv pkg/PerformanceAnalytics/sandbox/pulkit/data/ret.csv pkg/PerformanceAnalytics/sandbox/pulkit/src/moment.c pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw Removed: pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/MinTRL.R pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/PSRopt.R pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/ProbSharpeRatio.R pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/chart.SharpeEfficient.R pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/data.csv pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/moment.c pkg/PerformanceAnalytics/sandbox/pulkit/week1/code/table.PSR.R pkg/PerformanceAnalytics/sandbox/pulkit/week1/vignette/ProbSharpe.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/week2/code/BenchmarkPlots.R pkg/PerformanceAnalytics/sandbox/pulkit/week2/code/BenchmarkSR.R pkg/PerformanceAnalytics/sandbox/pulkit/week2/code/SRIndifferenceCurve.R pkg/PerformanceAnalytics/sandbox/pulkit/week2/vignette/SharepRatioEfficientFrontier.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/GoldenSection.R pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/MaxDD.R pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/MonteSimulTriplePenance.R pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/TriplePenance.R pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/TuW.R pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/chart.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/data1.csv pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/code/table.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/week3_4/vignette/TriplePenance.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/week5/EDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/week5/REDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/week5/REDDCOPS.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/week5/REM.R pkg/PerformanceAnalytics/sandbox/pulkit/week5/chart.REDD.R pkg/PerformanceAnalytics/sandbox/pulkit/week5/edd.R pkg/PerformanceAnalytics/sandbox/pulkit/week5/redd.R pkg/PerformanceAnalytics/sandbox/pulkit/week5/ret.csv pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBetaMulti.R pkg/PerformanceAnalytics/sandbox/pulkit/week6/Drawdownalpha.R pkg/PerformanceAnalytics/sandbox/pulkit/week6/data.csv pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R Modified: pkg/PerformanceAnalytics/sandbox/pulkit/week1/vignette/ProbSharpe.pdf Log: Moving files into R directory Copied: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R (from rev 2790, pkg/PerformanceAnalytics/sandbox/pulkit/week2/code/BenchmarkPlots.R) =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-16 08:53:23 UTC (rev 2794) @@ -0,0 +1,145 @@ +#'@title Benchmark Sharpe Ratio Plots +#' +#'@description +#'Benchmark Sharpe Ratio Plots are used to give the relation ship between the +#'Benchmark Sharpe Ratio and average 
correlation,average sharpe ratio or the number of #'strategies keeping other parameters constant. Here average Sharpe ratio , average #'correlation stand for the average of all the strategies in the portfolio. The original +#'point of the return series is also shown on the plots. +#' +#'The equation for the Benchamark Sharpe Ratio is. +#' +#'\deqn{SR_B = \overline{SR}\sqrt{\frac{S}{1+(S-1)\overline{\rho}}}} +#' +#'Here \eqn{S} is the number of strategies and \eqn{\overline{\rho}} is the average +#'correlation across off diagonal elements and is given by +#' +#'\deqn{\overline{\rho} = \frac{2\sum_{s=1}^{S} \sum_{t=s+1}^{S} \rho_{S,t}}{S(S-1)}} +#' +#'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of +#' asset returns +#'@param ylab set the y-axis label, as in \code{\link{plot}} +#'@param xlab set the x-axis label, as in \code{\link{plot}} +#'@param main set the chart title, as in \code{\link{plot}} +#'@param element.color set the element.color value as in \code{\link{plot}} +#'@param lwd set the width of the line, as in \code{\link{plot}} +#'@param pch set the pch value, as in \code{\link{plot}} +#'@param cex set the cex value, as in \code{\link{plot}} +#'@param cex.axis set the cex.axis value, as in \code{\link{plot}} +#'@param cex.main set the cex.main value, as in \code{\link{plot}} +#'@param vs The values against which benchmark SR has to be plotted. can be +#'"sharpe","correlation" or "strategies" +#'@param ylim set the ylim value, as in \code{\link{plot}} +#'@param xlim set the xlim value, as in \code{\link{plot}} + +#'@references +#'Bailey, David H. and Lopez de Prado, Marcos, The Strategy Approval Decision: +#'A Sharpe Ratio Indifference Curve Approach (January 2013). Algorithmic Finance, +#'Vol. 2, No. 1 (2013). +#' +#'@seealso \code{\link{plot}} +#'@keywords ts multivariate distribution models hplot +#'@examples +#' +#'chart.BenchmarkSR(edhec,vs="strategies") +#'chart.BenchmarkSR(edhec,vs="sharpe") +#'@export + +chart.BenchmarkSR<-function(R=NULL,main=NULL,ylab = NULL,xlab = NULL,element.color="darkgrey",lwd = 2,pch = 1,cex = 1,cex.axis=0.8,cex.lab = 1,cex.main = 1,vs=c("sharpe","correlation","strategies"),xlim = NULL,ylim = NULL,...){ + + # DESCRIPTION: + # Draws Benchmark SR vs various variables such as average sharpe , + # average correlation and the number of strategies + + # INPUT: + # The Return Series of the portfolio is taken as the input. The Return + # Series can be an xts, vector, matrix, data frame, timeSeries or zoo object of + # asset returns. + + # All other inputs are the same as "plot" and are principally included + # so that some sensible defaults could be set. 
+ + # vs parameter takes the value against which benchmark sr has to be plotted + + # FUNCTION: + if(!is.null(R)){ + x = checkData(R) + columns = ncol(x) + avgSR = mean(SharpeRatio(R)) + } + else{ + if(is.null(avgSR) | is.null(S)){ + stop("The average SR and the number of strategies should not be NULL") + } + + } + corr = table.Correlation(edhec,edhec) + corr_avg = 0 + for(i in 1:(columns-1)){ + for(j in (i+1):columns){ + corr_avg = corr_avg + corr[(i-1)*columns+j,] + } + } + corr_avg = corr_avg*2/(columns*(columns-1)) + if(vs=="sharpe"){ + if(is.null(ylab)){ + ylab = "Benchmark Sharpe Ratio" + } + if(is.null(xlab)){ + xlab = "Average Sharpe Ratio" + } + if(is.null(main)){ + main = "Benchmark Sharpe Ratio vs Average Sharpe Ratio" + } + sr = seq(0,1,length.out=30) + SR_B = sr*sqrt(columns/(1+(columns-1)*corr_avg[1,1])) + plot(sr,SR_B,type="l",xlab=xlab,ylab=ylab,main=main,lwd = lwd,pch=pch,cex = cex,cex.lab = cex.lab) + points(avgSR,BenchmarkSR(R),col="blue",pch=10) + text(avgSR,BenchmarkSR(R),"Return Series ",pos=4) + } + if(vs=="correlation"){ + + if(is.null(ylab)){ + ylab = "Benchmark Sharpe Ratio" + } + if(is.null(xlab)){ + xlab = "Average Correlation" + } + if(is.null(main)){ + main = "Benchmark Sharpe Ratio vs Correlation" + } + rho = seq(0,1,length.out=30) + SR_B = avgSR*sqrt(columns/(1+(columns-1)*rho)) + plot(rho,SR_B,type="l",xlab=xlab,ylab=ylab,main=main,lwd = lwd,pch=pch,cex = cex,cex.lab = cex.lab) + points(corr_avg[1,1],BenchmarkSR(R),col="blue",pch=10) + text(corr_avg[1,1],BenchmarkSR(R),"Return Series ",pos=4) + } + if(vs=="strategies"){ + + if(is.null(ylab)){ + ylab = "Benchmark Sharpe Ratio" + } + if(is.null(xlab)){ + xlab = "Number of Strategies" + } + if(is.null(main)){ + main = "Benchmark Sharpe Ratio vs Number of Strategies" + } + n = seq(2,100,length.out=20) + SR_B = avgSR*sqrt(n/(1+(n-1)*corr_avg[1,1])) + plot(n,SR_B,type="l",xlab=xlab,ylab=ylab,main=main,lwd = lwd,pch=pch,cex = cex,cex.lab = cex.lab) + points(columns,BenchmarkSR(R),col="blue",pch=10) + text(columns,BenchmarkSR(R),"Return Series ",pos=4) + } + +} + +############################################################################### +# R (http://r-project.org/) Econometrics for Performance and Risk Analysis +# +# Copyright (c) 2004-2013 Peter Carl and Brian G. Peterson +# +# This R package is distributed under the terms of the GNU Public License (GPL) +# for full details see the file COPYING +# +# $Id: BenchmarkSRPlots.R $ +# +############################################################################### Copied: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R (from rev 2790, pkg/PerformanceAnalytics/sandbox/pulkit/week2/code/BenchmarkSR.R) =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R 2013-08-16 08:53:23 UTC (rev 2794) @@ -0,0 +1,70 @@ +#'@title +#'Benchmark Sharpe Ratio +#' +#'@description +#'The benchmark SR is a linear function of the average +#' SR of the individual strategies, and a decreasing +#' convex function of the number of strategies and the +#' average pairwise correlation. The Returns are given as +#' the input with the benchmark Sharpe Ratio as the output. 
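+#'
+#' A quick sanity check of the formula below with made-up numbers: for
+#' \eqn{S = 13} strategies, an average SR of 0.22 and an average pairwise
+#' correlation of 0.35,
+#' \preformatted{0.22 * sqrt(13 / (1 + 12 * 0.35))  # ~ 0.348}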
+#' +#'@aliases BenchmarkSR +#'\deqn{SR_B = \bar{SR}\sqrt{\frac{S}{1+(S-1)\bar{\rho}}}} +#' +#'Here \eqn{\bar{SR}} is the average SR of the portfolio and \eqn{\bar{\rho}} +#'is the average correlation across off-diagonal elements +#' +#'@param R a vector, matrix, data frame, timeSeries or zoo object of asset returns +#' +#'@references +#'Bailey, David H. and Lopez de Prado, Marcos, The Strategy Approval Decision: +#'A Sharpe Ratio Indifference Curve Approach (January 2013). Algorithmic Finance, +#'Vol. 2, No. 1 (2013). +#' +#'@examples +#' +#'data(edhec) +#'BenchmarkSR(edhec) #expected 0.393797 +#' +#'@export +#' +BenchmarkSR<-function(R){ + # DESCRIPTION: + # Returns the value of the Benchmark Sharpe Ratio. + + # INPUT: + # The return series of all the series in the portfolio is taken as the input + # The return series can be a vector, matrix, data frame, timeSeries or zoo + # object of asset returns. + + # FUNCTION: + x = checkData(R) + columns = ncol(x) + #TODO : What to do if the number of columns is only one ? + if(columns == 1){ + stop("The number of return series should be greater than 1") + } + SR = SharpeRatio(x) + sr_avg = mean(SR) + corr = table.Correlation(x,x) + corr_avg = 0 + for(i in 1:(columns-1)){ + for(j in (i+1):columns){ + corr_avg = corr_avg + corr[(i-1)*columns+j,1] + } + } + corr_avg = corr_avg*2/(columns*(columns-1)) + SR_Benchmark = sr_avg*sqrt(columns/(1+(columns-1)*corr_avg)) + return(SR_Benchmark) +} +############################################################################### +# R (http://r-project.org/) Econometrics for Performance and Risk Analysis +# +# Copyright (c) 2004-2013 Peter Carl and Brian G. Peterson +# +# This R package is distributed under the terms of the GNU Public License (GPL) +# for full details see the file COPYING +# +# $Id: BenchmarkSR.R $ +# +############################################################################### \ No newline at end of file Copied: pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R (from rev 2790, pkg/PerformanceAnalytics/sandbox/pulkit/week6/CDaRMultipath.R) =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R 2013-08-16 08:53:23 UTC (rev 2794) @@ -0,0 +1,142 @@ +#'@title +#'Conditional Drawdown at Risk for Multiple Sample Path +#' +#'@description +#' +#' For a given \eqn{\alpha \epsilon [0,1]} in the multiple sample-paths setting, CDaR, +#' denoted by \eqn{D_{\alpha}(w)}, is the average of the \eqn{(1-\alpha).100\%} largest drawdowns +#' of the set \eqn{\{d_{st} | t = 1, \ldots, T, s = 1, \ldots, S\}}, and is defined by +#' +#' \deqn{D_\alpha(w) = \max_{\{q_{st}\} \epsilon Q} \sum_{s=1}^{S} \sum_{t=1}^{T} p_s q_{st} d_{st},} +#' +#' where +#' +#' \deqn{Q = \left\{ \left\{ q_{st} \right\}_{s,t=1}^{S,T} | \sum_{s=1}^{S} \sum_{t=1}^{T} p_s q_{st} = 1, 0 \leq q_{st} \leq \frac{1}{(1-\alpha)T}, s = 1, \ldots, S, t = 1, \ldots, T \right\}} +#' +#' For \eqn{\alpha = 1}, \eqn{D_\alpha(w)} is defined by (3) with the constraint +#' \eqn{0 {\leq} q_{st} {\leq} \frac{1}{(1-\alpha)T}} in Q replaced by \eqn{q_{st} {\geq} 0} +#' +#' As in the case of a single sample-path, the CDaR definition includes two special cases: +#' (i) for \eqn{\alpha = 1}, \eqn{D_1(w)} is the maximum drawdown, also called drawdown from +#' peak-to-valley, and (ii) for \eqn{\alpha} = 0, \eqn{D_\alpha(w)} is the average drawdown +#' +#'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of multiple sample path returns +#'@param ps the
probability for each sample path +#'@param scen the number of scenarios in the Return series +#'@param instr the number of instruments in the Return series +#'@param geometric utilize geometric chaining (TRUE) or simple/arithmetic +#'chaining (FALSE) to aggregate returns, default TRUE +#'@param p confidence level for calculation ,default(p=0.95) +#'@param \dots any other passthru parameters +#' +#'@references +#'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM) +#' with Drawdown Measure.Research Report 2012-9, ISE Dept., University of Florida, +#' September 2012 + + +CdarMultiPath<-function (R,ps,sample, geometric = TRUE,p = 0.95, ...) +{ + + #p = .setalphaprob(p) + R = na.omit(R) + nr = nrow(R) + + # ERROR HANDLING and TESTING + #if(sample == instr){ + + #} + + multicdar<-function(x){ + # checking if nr*p is an integer + if((p*nr) %% 1 == 0){ + drawdowns = as.matrix(Drawdowns(x)) + drawdowns = drawdowns(order(drawdowns),decreasing = TRUE) + # average of the drawdowns greater the (1-alpha).100% largest drawdowns + result = (1/((1-p)*nr(x)))*sum(drawdowns[((1-p)*nr):nr]) + } + else{ # if nr*p is not an integer + #f.obj = c(rep(0,nr),rep((1/(1-p))*(1/nr),nr),1) + drawdowns = -as.matrix(Drawdowns(x)) + + # The objective function is defined + f.obj = NULL + for(i in 1:sample){ + for(j in 1:nr){ + f.obj = c(f.obj,ps[i]*drawdowns[j,i]) + } + } + f.con = NULL + # constraint 1: ps.qst = 1 + for(i in 1:sample){ + for(j in 1:nr){ + f.con = c(f.con,ps[i]) + } + } + f.con = matrix(f.con,nrow =1) + f.dir = "==" + f.rhs = 1 + # constraint 2 : qst >= 0 + for(i in 1:sample){ + for(j in 1:nr){ + r<-rep(0,sample*nr) + r[(i-1)*sample+j] = 1 + f.con = rbind(f.con,r) + } + } + f.dir = c(f.dir,rep(">=",sample*nr)) + f.rhs = c(f.rhs,rep(0,sample*nr)) + + + # constraint 3 : qst =< 1/(1-alpha)*T + for(i in 1:sample){ + for(j in 1:nr){ + r<-rep(0,sample*nr) + r[(i-1)*sample+j] = 1 + f.con = rbind(f.con,r) + } + } + f.dir = c(f.dir,rep("<=",sample*nr)) + f.rhs = c(f.rhs,rep(1/(1-p)*nr,sample*nr)) + + # constraint 1: + # f.con = cbind(-diag(nr),diag(nr),1) + # f.dir = c(rep(">=",nr)) + # f.rhs = c(rep(0,nr)) + + #constatint 2: + # ut = diag(nr) + # ut[-1,-nr] = ut[-1,-nr] - diag(nr - 1) + # f.con = rbind(f.con,cbind(ut,matrix(0,nr,nr),1)) + # f.dir = c(rep(">=",nr)) + # f.rhs = c(f.rhs,-R) + + #constraint 3: + # f.con = rbind(f.con,cbind(matrix(0,nr,nr),diag(nr),1)) + # f.dir = c(rep(">=",nr)) + # f.rhs = c(f.rhs,rep(0,nr)) + + #constraint 4: + # f.con = rbind(f.con,cbind(diag(nr),matrix(0,nr,nr),1)) + # f.dir = c(rep(">=",nr)) + # f.rhs = c(f.rhs,rep(0,nr)) + val = lp("max",f.obj,f.con,f.dir,f.rhs) + result = val$objval + } +} + R = checkData(R, method = "matrix") + result = matrix(nrow = 1, ncol = ncol(R)/sample) + + for (i in 1:(ncol(R)/sample)) { + ret<-NULL + for(j in 1:sample){ + ret<-cbind(ret,R[,(j-1)*ncol(R)/sample+i]) + } + result[i] <- multicdar(ret) + } + dim(result) = c(1, NCOL(R)/sample) + colnames(result) = colnames(R)[1:ncol(R)/sample] + rownames(result) = paste("Conditional Drawdown ", + p * 100, "%", sep = "") + return(result) +} Copied: pkg/PerformanceAnalytics/sandbox/pulkit/R/CdaR.R (from rev 2790, pkg/PerformanceAnalytics/sandbox/pulkit/week6/CdaR.R) =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/CdaR.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/CdaR.R 2013-08-16 08:53:23 UTC (rev 2794) @@ -0,0 +1,73 @@ +CDaR<-function (R, weights = NULL, geometric = TRUE, invert = TRUE, p = 0.95, ...) 
+{ + #p = .setalphaprob(p) + if (is.vector(R) || ncol(R) == 1) { + R = na.omit(R) + nr = nrow(R) + # checking if nr*p is an integer + if((p*nr) %% 1 == 0){ + drawdowns = -Drawdowns(R) + drawdowns = drawdowns[order(drawdowns),increasing = TRUE] + print(drawdowns) + # average of the drawdowns greater the (1-alpha).100% largest drawdowns + result = -(1/((1-p)*nr))*sum(drawdowns[((p)*nr):nr]) + } + else{ + # CDaR using the CVaR function + # result = ES(Drawdowns(R),p=p,method="historical") + # if nr*p is not an integer + f.obj = c(rep(0,nr),rep(((1/(1-p))*(1/nr)),nr),1) + + # k varies from 1:nr + # constraint : -uk +zk +y >= 0 + f.con = cbind(-diag(nr),diag(nr),1) + f.dir = c(rep(">=",nr)) + f.rhs = c(rep(0,nr)) + + # constraint : uk -uk-1 >= -rk + ut = diag(nr) + ut[-1,-nr] = ut[-1,-nr] - diag(nr - 1) + f.con = rbind(f.con,cbind(ut,matrix(0,nr,nr),0)) + f.dir = c(rep(">=",nr)) + f.rhs = c(f.rhs,-R) + + # constraint : zk >= 0 + f.con = rbind(f.con,cbind(matrix(0,nr,nr),diag(nr),0)) + f.dir = c(rep(">=",nr)) + f.rhs = c(f.rhs,rep(0,nr)) + + # constraint : uk >= 0 + f.con = rbind(f.con,cbind(diag(nr),matrix(0,nr,nr),0)) + f.dir = c(rep(">=",nr)) + f.rhs = c(f.rhs,rep(0,nr)) + + val = lp("min",f.obj,f.con,f.dir,f.rhs) + val_disp = lp("min",f.obj,f.con,f.dir,f.rhs,compute.sens = TRUE ) + result = -val$objval + } + if (invert) + result <- -result + + return(result) + } + else { + R = checkData(R, method = "matrix") + if (is.null(weights)) { + result = matrix(nrow = 1, ncol = ncol(R)) + for (i in 1:ncol(R)) { + result[i] <- CDaR(R[, i, drop = FALSE], p = p, + geometric = geometric, invert = invert, ... = ...) + } + dim(result) = c(1, NCOL(R)) + colnames(result) = colnames(R) + rownames(result) = paste("Conditional Drawdown ", round(p,3)*100, "%", sep = "") + } + else { + portret <- Return.portfolio(R, weights = weights, + geometric = geometric) + result <- CDaR(portret, p = p, geometric = geometric, + invert = invert, ... = ...) + } + return(result) + } +} Copied: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R (from rev 2790, pkg/PerformanceAnalytics/sandbox/pulkit/week6/DrawdownBeta.R) =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-16 08:53:23 UTC (rev 2794) @@ -0,0 +1,130 @@ +#'@title +#'Drawdown Beta for single path +#' +#'@description +#'The drawdown beta is formulated as follows +#' +#'\deqn{\beta_DD = \frac{{\sum_{t=1}^T}{q_t^\asterisk}{(w_{k^{\asterisk}(t)}-w_t)}}{D_{\alpha}(w^M)}} +#' here \eqn{\beta_DD} is the drawdown beta of the instrument. +#'\eqn{k^{\asterisk}(t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_k^M} +#' +#'\eqn{q_t^\asterisk=1/((1-\alpha)T)} if \eqn{d_t^M} is one of the +#'\eqn{(1-\alpha)T} largest drawdowns \eqn{d_1^{M} ,......d_t^M} of the +#'optimal portfolio and \eqn{q_t^\asterisk = 0} otherwise. It is assumed +#'that \eqn{D_\alpha(w^M) {\neq} 0} and that \eqn{q_t^\asterisk} and +#'\eqn{k^{\asterisk}(t) are uniquely determined for all \eqn{t = 1....T} +#' +#'The numerator in \eqn{\beta_DD} is the average rate of return of the +#'instrument over time periods corresponding to the \eqn{(1-\alpha)T} largest +#'drawdowns of the optimal portfolio, where \eqn{w_t - w_k^{\asterisk}(t)} +#'is the cumulative rate of return of the instrument from the optimal portfolio#' peak time \eqn{k^\asterisk(t)} to time t. 
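+#'
+#' A toy sketch of forming the \eqn{q_t^\asterisk} weights described above
+#' (illustrative data only, not the implementation in this file):
+#' \preformatted{
+#' alpha <- 0.95; nT <- 100
+#' dd_m <- runif(nT, 0, 0.3)          # stand-in optimal-portfolio drawdowns
+#' k <- ceiling((1 - alpha) * nT)     # number of largest drawdowns kept
+#' q <- as.numeric(dd_m >= sort(dd_m, decreasing = TRUE)[k]) / k
+#' sum(q)                             # equals 1 when there are no ties
+#' }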
+#' +#'The difference in CDaR and standard betas can be explained by the +#'conceptual difference in beta definitions: the standard beta accounts for +#'the fund returns over the whole return history, including the periods +#'when the market goes up, while CDaR betas focus only on market drawdowns +#'and, thus, are not affected when the market performs well. +#' +#'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#'@param Rm Return series of the optimal portfolio an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#'@param p confidence level for calculation ,default(p=0.95) +#'@param weights portfolio weighting vector, default NULL, see Details +#' @param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default TRUE +#' @param type The type of BetaDrawdown if specified alpha then the alpha value given is taken (default 0.95). If "average" then +#' alpha = 0 and if "max" then alpha = 1 is taken. +#'@param \dots any passthru variable. +#' +#'@references +#'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model +#'(CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University +#'of Florida,September 2012. +#' +#'@examples +#' +#'BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431 + +BetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ + + # DESCRIPTION: + # + # The function is used to find the Drawdown Beta. + # + # INPUT: + # The Return series of the portfolio and the optimal portfolio + # is taken as the input. + # + # OUTPUT: + # The Drawdown beta is given as the output. + + + x = checkData(R) + xm = checkData(Rm) + columnnames = colnames(R) + columns = ncol(R) + drawdowns_m = Drawdowns(Rm) + type = type[1] + if(type=="average"){ + p = 0 + } + if(type == "max"){ + p = 1 + } + if(!is.null(weights)){ + x = Returns.portfolio(R,weights) + } + if(geometric){ + cumul_x = cumprod(x+1)-1 + cumul_xm = cumprod(xm+1)-1 + } + else{ + cumul_x = cumsum(x) + cumul_xm = cumsum(xm) + } + DDbeta<-function(x){ + q = NULL + q_quantile = quantile(drawdowns_m,1-p) + print(drawdowns_m) + for(i in 1:nrow(Rm)){ + + if(drawdowns_m[i]= 0 + for(i in 1:sample){ + for(j in 1:nr){ + r<-rep(0,sample*nr) + r[(i-1)*sample+j] = 1 + f.con = rbind(f.con,r) + } + } + f.dir = c(f.dir,rep(">=",sample*nr)) + f.rhs = c(f.rhs,rep(0,sample*nr)) + + + # constraint 3 : qst =< 1/(1-alpha)*T + for(i in 1:sample){ + for(j in 1:nr){ + r<-rep(0,sample*nr) + r[(i-1)*sample+j] = 1 + f.con = rbind(f.con,r) + } + } + f.dir = c(f.dir,rep("<=",sample*nr)) + f.rhs = c(f.rhs,rep(1/(1-p)*nr,sample*nr)) + + val = lp("max",f.obj,f.con,f.dir,f.rhs) + q = matrix(val$solution,ncol = sample) + # TODO INCORPORATE WEIGHTS + + if(geometric){ + cumul_xm = cumprod(xm+1)-1 + } + else{ + cumul_xm = cumsum(xm) + } + # Function to calculate Drawdown beta for multipath + multiDDbeta<-function(x){ + boolean = (cummax(cumul_xm)==cumul_xm) + index = NULL + for(i in 1:sample){ + for(j in 1:nrow(Rm)){ + if(boolean[j,i] == TRUE){ + index = c(index,j) + b = j + } + else{ + index = c(index,b) + } + } + } + index = matrix(index,ncol = sample) + beta_dd = 0 + for(i in 1:sample){ + beta_dd = beta_dd + sum(ps[i]*q[,i]*(as.numeric(x[index[,i],i])-x[,i])) + } + beta_dd = beta_dd/CdarMultiPath(Rm,ps=ps,p=p,sample = sample) + return(beta_dd) + } + + result = NULL + + for (i in 1:(ncol(R)/sample)) { + ret<-NULL + for(j in 1:sample){ + 
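+      # gather the i-th instrument's sample paths into one T x S matrix
+      # before passing it to multiDDbeta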
ret<-cbind(ret,R[,(j-1)*ncol(R)/sample+i]) + } + result <-c(result, multiDDbeta(ret)) + } + result = matrix(result,nrow = 1) + colnames(result) = colnames(R)[1:(ncol(R)/sample)] + #colnames(result) = colnames(R)[1:ncol(R)/sample] + rownames(result) = paste("Conditional Drawdown","(",p*100,"%)",sep="") + return(result) + } + + + + Copied: pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R (from rev 2790, pkg/PerformanceAnalytics/sandbox/pulkit/week6/Drawdownalpha.R) =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R 2013-08-16 08:53:23 UTC (rev 2794) @@ -0,0 +1,84 @@ +#' @title +#' Drawdown alpha +#' +#' @description +#' Then the difference between the actual rate of return and the rate of +#' return of the instrument estimated by \eqn{\beta_DD{w_T}} is called CDaR +#' alpha and is given by +#' +#' \deqn{\alpha_DD = w_T - \beta_DD{w_T^M}} +#' +#' here \eqn{\beta_DD} is the beta drawdown. The code for beta drawdown can +#' be found here \code{BetaDrawdown}. +#' +#' Postive \eqn{\alpha_DD} implies that the instrument did better than it was +#' predicted, and consequently, \eqn{\alpha_DD} can be used as a performance +#' measure to rank instrument and to identify those that outperformed their +#' CAPM predictions +#' +#' +#'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#'@param Rm Return series of the optimal portfolio an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#'@param p confidence level for calculation ,default(p=0.95) +#'@param weights portfolio weighting vector, default NULL, see Details +#' @param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default TRUE +#' @param type The type of BetaDrawdown if specified alpha then the alpha value given is taken (default 0.95). If "average" then alpha = 0 and if "max" then alpha = 1 is taken. +#'@param \dots any passthru variable +#' +#'@references +#'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model +#'(CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University +#'of Florida,September 2012. +#' +#'@examples +#' +#'AlphaDrawdown(edhec[,1],edhec[,2]) ## expected value : 0.5141929 +#' +#'AlphaDrawdown(edhec[,1],edhec[,2],type="max") ## expected value : 0.8983177 +#' +#'AlphaDrawdown(edhec[,1],edhec[,2],type="average") ## expected value : 1.692592 +#'@export + + +AlphaDrawdown<-function(R,Rm,p=0.95,weights = NULL,geometric = TRUE,type=c("alpha","average","max"),...){ + # DESCRIPTION: + # This function calculates the drawdown alpha given the return series + # and the optimal return series + # + # INPUTS: + # The return series of the portfolio , the return series of the optimal + # portfolio. The confidence level, the weights and the type of cumulative + # returns. + + # OUTPUT: + # The drawdown alpha is given as the output + + + # TODO ERROR HANDLING + if(ncol(R) != ncol(Rm)){ + stop("The number of columns in R and Rm should be equal") + } + x = checkData(R) + xm = checkData(Rm) + beta = BetaDrawdown(R,Rm,p = p,weights=weights,geometric=geometric,type=type,...) 
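+    # alpha_DD = E[w_T] - beta_DD * E[w_T^M]; the expectations are taken
+    # below as sample means of the cumulative return series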
+ if(!is.null(weights)){ + x = Returns.portfolio(R,weights) + } + if(geometric){ + cumul_x = cumprod(x+1)-1 + cumul_xm = cumprod(xm+1)-1 + } + else{ + cumul_x = cumsum(x) + cumul_xm = cumsum(xm) + } + x_expected = mean(cumul_x) + xm_expected = mean(cumul_xm) + alpha = x_expected - beta*xm_expected + return(alpha) +} + + + + + Copied: pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R (from rev 2790, pkg/PerformanceAnalytics/sandbox/pulkit/week5/EDDCOPS.R) =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-16 08:53:23 UTC (rev 2794) @@ -0,0 +1,86 @@ +#'@title +#'Economic Drawdown Controlled Optimal Portfolio Strategy +#' +#'@description +#'The Economic Drawdown Controlled Optimal Portfolio Strategy(EDD-COPS) has +#'the portfolio fraction allocated to single risky asset as: +#' +#' \deqn{x_t = Max\left\{0,\biggl(\frac{\lambda/\sigma + 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-EDD(t)}{1-EDD(t)}\biggr]\right\}} +#' +#' The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. +#' +#' +#'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of +#' asset returns +#'@param delta Drawdown limit +#'@param gamma (1-gamma) is the investor risk aversion +#'else the return series will be used +#'@param Rf risk free rate can be vector such as government security rate of return. +#'@param h Look back period +#'@param geomtric geometric utilize geometric chaining (TRUE) or simple/arithmetic #'chaining(FALSE) to aggregate returns, default is TRUE. +#'@param ... any other variable +#' +#'@references Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to +#'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) +#' +#' +#'@examples +#' +#' # with S&P 500 data and T-bill data +#' +#'dt<-read.zoo("returns.csv",sep=",",header = TRUE) +#'dt<-as.xts(dt) +#'EDDCOPS(dt[,1],delta = 0.33,gamma = 0.7,Rf = (1+dt[,2])^(1/12)-1,geometric = TRUE) +#' +#'data(edhec) +#'EDDCOPS(edhec,delta = 0.1,gamma = 0.7,Rf = 0) +#'@export +#' + +EDDCOPS<-function(R ,delta,gamma,Rf,geometric = TRUE,...){ + # DESCRIPTION + # Calculates the dynamic weights for single and double risky asset portfolios + # using Economic Drawdown + + # INPUT: + # The Return series ,drawdown limit, risk aversion,risk free rate are + # given as the input + + # FUNCTION: + x = checkData(R) + rf = checkData(Rf) + columns = ncol(x) + columnnames = colnames(x) + sharpe = SharpeRatio.annualized(x,Rf) + + sd = StdDev.annualized(R) + dynamicPort<-function(x){ + factor = (sharpe[,column]/sd[,column]+0.5)/(1-delta*gamma) + xt = ifelse(factor*(delta-x)/(1-x)>0,factor*(delta-x)/(1-x),0) + return(xt) + } + + edd = EconomicDrawdown(R,Rf,geometric) [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2794 From noreply at r-forge.r-project.org Fri Aug 16 13:03:48 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 16 Aug 2013 13:03:48 +0200 (CEST) Subject: [Returnanalytics-commits] r2795 - pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1 Message-ID: <20130816110348.DC21D184B6F@r-forge.r-project.org> Author: shubhanm Date: 2013-08-16 13:03:46 +0200 (Fri, 16 Aug 2013) New Revision: 2795 Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/GLMSmoothIndex.R Log: Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/GLMSmoothIndex.R 
From noreply at r-forge.r-project.org  Fri Aug 16 13:03:48 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Fri, 16 Aug 2013 13:03:48 +0200 (CEST)
Subject: [Returnanalytics-commits] r2795 - pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1
Message-ID: <20130816110348.DC21D184B6F@r-forge.r-project.org>

Author: shubhanm
Date: 2013-08-16 13:03:46 +0200 (Fri, 16 Aug 2013)
New Revision: 2795

Modified:
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/GLMSmoothIndex.R
Log:


Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/GLMSmoothIndex.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/GLMSmoothIndex.R	2013-08-16 08:53:23 UTC (rev 2794)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/GLMSmoothIndex.R	2013-08-16 11:03:46 UTC (rev 2795)
@@ -1,11 +1,3 @@
-#This measure is well known in the
-#industrial organization literature as the Herfindahl index, a measure of the
-#concentration of firms in a given industry where yj represents the market share of
-#firm j. Because each yj lies in [0,1], x is also confined to the unit interval, and is
-#minimized when all the yj's are identical, which implies a value of 1/(k+1) for x; and is
-#maximized when one coefficient is 1 and the rest are 0, in which case x = 1. In the context
-#of smoothed returns, a lower value of x implies more smoothing, and the upper bound
-#of 1 implies no smoothing, hence we shall refer to x as a ``smoothing index''.
 GLMSmoothIndex<-
   function(R = NULL, ...)
   {

From noreply at r-forge.r-project.org  Fri Aug 16 13:08:27 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Fri, 16 Aug 2013 13:08:27 +0200 (CEST)
Subject: [Returnanalytics-commits] r2796 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: . R Shubhankit Week1/Code man
Message-ID: <20130816110827.46349184B6F@r-forge.r-project.org>

Author: shubhanm
Date: 2013-08-16 13:08:26 +0200 (Fri, 16 Aug 2013)
New Revision: 2796

Added:
   pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION
   pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDDopt.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDrawdown.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Normalized.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/UnsmoothReturn.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.Autocorrelation.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/maxDDGBM.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/na.skip.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.ComparitiveReturn.GLM.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.UnsmoothReturn.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Shubhankit.Rproj
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Shubhankit/
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Shubhankit/inst/
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Shubhankit/man/
   pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd
   pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Rd
   pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd
   pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd
   pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd
   pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd
   pkg/PerformanceAnalytics/sandbox/Shubhankit/man/chart.Autocorrelation.Rd
   pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.ComparitiveReturn.GLM.Rd
   pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.NormDD.Rd
   pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.UnsmoothReturn.Rd
Modified:
   pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/GLMSmoothIndex.R
Log:
 /man .Rd files

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION	2013-08-16 11:08:26 UTC (rev 2796)
@@ -0,0 +1,53 @@
+Package: Shubhankit
+Type: Package
+Title: Econometric tools for performance and risk analysis.
+Version: 1.1.0
+Date: $Date: 2013-01-29 21:04:00 +0800 (Tue, 29 Jan 2013) $
+Author: Peter Carl, Brian G. Peterson
+Maintainer: Brian G. Peterson
+Description: Collection of econometric functions for
+    performance and risk analysis. This package aims to aid
+    practitioners and researchers in utilizing the latest
+    research in analysis of non-normal return streams. In
+    general, it is most tested on return (rather than
+    price) data on a regular scale, but most functions will
+    work with irregular return data as well, and increasing
+    numbers of functions will work with P&L or price data
+    where possible.
+Depends:
+    R (>= 2.14.0),
+    zoo,
+    xts (>= 0.8-9)
+Suggests:
+    Hmisc,
+    MASS,
+    tseries,
+    quadprog,
+    sn,
+    robustbase,
+    quantreg,
+    gplots,
+    ff
+License: GPL
+URL: http://r-forge.r-project.org/projects/returnanalytics/
+Copyright: (c) 2004-2012
+Contributors: Kris Boudt, Diethelm Wuertz, Eric Zivot, Matthieu Lestel
+Thanks: A special thanks for additional contributions from
+    Stefan Albrecht, Khanh Nguyen, Jeff Ryan,
+    Josh Ulrich, Sankalp Upadhyay, Tobias Verbeke,
+    H. Felix Wittmann, Ram Ahluwalia
+Collate:
+    'GLMSmoothIndex.R'
+    'chart.Autocorrelation.R'
+    'ACStdDev.annualized.R'
+    'CalmarRatio.Normalized.R'
+    'na.skip.R'
+    'Return.GLM.R'
+    'table.ComparitiveReturn.GLM.R'
+    'table.UnsmoothReturn.R'
+    'UnsmoothReturn.R'
+    'EmaxDDGBM.R'
+    'maxDDGBM.R'
+    'table.normDD.R'
+    'CDDopt.R'
+    'CDrawdown.R'

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE	2013-08-16 11:08:26 UTC (rev 2796)
@@ -0,0 +1,12 @@
+export(ACStdDev.annualized)
+export(CDrawdown)
+export(chart.Autocorrelation)
+export(EMaxDDGBM)
+export(GLMSmoothIndex)
+export(QP.Norm)
+export(Return.GLM)
+export(SterlingRatio.Normalized)
+export(table.ComparitiveReturn.GLM)
+export(table.EMaxDDGBM)
+export(table.NormDD)
+export(table.UnsmoothReturn)

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R	2013-08-16 11:08:26 UTC (rev 2796)
@@ -0,0 +1,77 @@
+#' calculate a multiperiod or annualized autocorrelation-adjusted Standard Deviation
+#'
+#' @aliases sd.multiperiod sd.annualized StdDev.annualized
+#' @param x an xts, vector, matrix, data frame, timeSeries or zoo object of
+#' asset returns
+#' @param lag number of autocorrelation lag factors specified by the user
+#' @param scale number of periods in a year (daily scale = 252, monthly scale =
+#' 12, quarterly scale = 4)
+#' @param \dots any other passthru parameters
+#' @author R
+#' @seealso \code{\link[stats]{sd}} \cr
+#' \url{http://wikipedia.org/wiki/inverse-square_law}
+#' @references Burghardt, G., and L. Liu, \emph{It's the Autocorrelation, Stupid},
+#' Newedge working paper, November 2012.
+#' \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} \cr
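+#' @note The adjustment multiplies the annualized standard deviation by
+#' \eqn{1 + 2\sum_{k=1}^{lag} \rho_k^2}, where \eqn{\rho_k} is the k-th
+#' autocorrelation of the return series (see the loop in the function body below).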
+#' @keywords ts multivariate distribution models
+#' @examples
+#'
+#' data(edhec)
+#' ACsd.annualized(edhec,3)
+
+#'
+#' @export
+#' @rdname ACStdDev.annualized
+ACStdDev.annualized <- ACsd.annualized <- ACsd.multiperiod <-
+    function (R,lag=6, scale = NA, ...)
+    {
+        columns.a = ncol(R)
+        columnnames.a = colnames(R)
+        if(is.na(scale) && !xtsible(R))
+            stop("'x' needs to be timeBased or xtsible, or scale must be specified." )
+
+        if(is.na(scale)) {
+            freq = periodicity(R)
+            switch(freq$scale,
+                   minute = {stop("Data periodicity too high")},
+                   hourly = {stop("Data periodicity too high")},
+                   daily = {scale = 252},
+                   weekly = {scale = 52},
+                   monthly = {scale = 12},
+                   quarterly = {scale = 4},
+                   yearly = {scale = 1}
+            )
+        }
+
+        for(column.a in 1:columns.a) { # for each asset passed in as R
+            # clean the data and get rid of NAs
+            column.return = R[,column.a]
+            acf = as.numeric(acf(as.numeric(column.return), plot = FALSE)[1:lag][[1]])
+            coef= sum(acf*acf)
+            if(!xtsible(R) & is.na(scale))
+            {
+                stop("'x' needs to be timeBased or xtsible, or scale must be specified." )
+            }
+            else
+            {
+                if(column.a == 1) { result = as.numeric(StdDev.annualized(column.return))*(1+2*coef) }
+                else { result = cbind (result, as.numeric(StdDev.annualized(column.return))*(1+2*coef)) }
+            }
+        }
+        dim(result) = c(1,NCOL(R))
+        colnames(result) = colnames(R)
+        rownames(result) = "Autocorrelated Annualized Standard Deviation"
+        return(result)
+    }
+
+###############################################################################
+# R (http://r-project.org/) Econometrics for Performance and Risk Analysis
+#
+# Copyright (c) 2004-2013 Peter Carl and Brian G. Peterson
+#
+# This R package is distributed under the terms of the GNU Public License (GPL)
+# for full details see the file COPYING
+#
+# $Id: ACStdDev.annualized.R
+#
+###############################################################################

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDDopt.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDDopt.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDDopt.R	2013-08-16 11:08:26 UTC (rev 2796)
@@ -0,0 +1,24 @@
+cDDOpt = function(rmat, alpha=0.05, rmin=0, wmin=0, wmax=1, weight.sum=1)
+{
+  require(Rglpk)
+  n = ncol(rmat) # number of assets
+  s = nrow(rmat) # number of scenarios, i.e. periods
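+  # decision vector is (w_1..w_n, u_1..u_s, z): asset weights, one auxiliary
+  # variable per scenario, and a scalar; the first two rows of Amat encode
+  # the budget and minimum-return constraints, the remaining s rows the
+  # per-scenario conditions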
+  averet = colMeans(rmat)
+  # create objective vector, constraint matrix, and constraint rhs
+  Amat = rbind(cbind(rbind(1,averet),matrix(data=0,nrow=2,ncol=s+1)),
+               cbind(rmat,diag(s),1))
+  objL = c(rep(0,n), as.numeric(CDrawdown(rmat,.9)), -1)
+  bvec = c(weight.sum,rmin,rep(0,s))
+  # direction vector
+  dir.vec = c("==",">=",rep(">=",s))
+  # bounds on weights
+  bounds = list(lower = list(ind = 1:n, val = rep(wmin,n)),
+                upper = list(ind = 1:n, val = rep(wmax,n)))
+  res = Rglpk_solve_LP(obj=objL, mat=Amat, dir=dir.vec, rhs=bvec,
+                       types=rep("C",length(objL)), max=T, bounds=bounds)
+  w = as.numeric(res$solution[1:n])
+  return(list(w=w,status=res$status))
+}
+# based on LP formulation code by Guy Yollin
+

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDrawdown.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDrawdown.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDrawdown.R	2013-08-16 11:08:26 UTC (rev 2796)
@@ -0,0 +1,73 @@
+#' Chekhlov Conditional Drawdown at Risk
+#'
+#' A new one-parameter family of risk measures called Conditional Drawdown (CDD) has
+#' been proposed. These measures of risk are functionals of the portfolio drawdown
+#' (underwater) curve considered in active portfolio management. For some value of
+#' the tolerance parameter \eqn{\alpha}, in the case of a single sample path, the
+#' drawdown functional is defined as the mean of the worst
+#' \eqn{(1-\alpha) \cdot 100\%} drawdowns. The CDD measure generalizes the
+#' notion of the drawdown functional to a multi-scenario case and can be considered as a
+#' generalization of deviation measure to a dynamic case. The CDD measure includes the
+#' Maximal Drawdown and Average Drawdown as its limiting cases.
+#'
+#' The model focuses on the concept of a drawdown measure that possesses all the
+#' properties of a deviation measure, generalizing deviation measures to a dynamic
+#' setting. It covers risk profiling via the Mixed Conditional Drawdown (a
+#' generalization of CDD), optimization techniques for CDD computation (reduction
+#' to a linear programming problem), and portfolio optimization with a constraint
+#' on Mixed CDD. The model develops the concept of a drawdown measure by
+#' generalizing the notion of the CDD to the case of several sample paths for the
+#' portfolio's uncompounded rate of return.
+#' @param Ra return vector of the portfolio
+#' @param p confidence level
+#' @author R Project
+#' @references Chekhlov, A., Uryasev, S., and Zabarankin, M. \emph{Drawdown
+#' Measure in Portfolio Optimization}, International Journal of Theoretical and
+#' Applied Finance, Vol. 8, No. 1 (2005), 13-58.
+#' @keywords Conditional Drawdown models
+#' @examples
+#'
+#' library(PerformanceAnalytics)
+#' data(edhec)
+#' CDrawdown(edhec)
+#' @rdname Cdrawdown
+#' @export
+
+CDrawdown <-
+  function (R,p=0.90, ...)
+  {
+    y = checkData(R, method = "xts")
+    columns = ncol(y)
+    rows = nrow(y)
+    columnnames = colnames(y)
+    rownames = rownames(y)
+
+    for(column in 1:columns) {
+      x = y[,column]
+      drawdown = findDrawdowns(x)
+      threshold= ES(x,p)[1]
+      total = length(drawdown$return)
+      num = length(drawdown$return[drawdown$return>threshold])
+      cva1= (((num/total)-p)/(1-p))*threshold
+      cva2=sum(drawdown$return)/((1-p)*total)
+      z = c((cva1+cva2))
+      znames = c("Conditional Drawdown at Risk")
+      if(column == 1) {
+        resultingtable = data.frame(Value = z, row.names = znames)
+      }
+      else {
+        nextcolumn = data.frame(Value = z, row.names = znames)
+        resultingtable = cbind(resultingtable, nextcolumn)
+      }
+
+    }
+    colnames(resultingtable) = columnnames
+    #ans = base::round(resultingtable, digits)
+    #ans
+    resultingtable
+  }
+
+###############################################################################
+# R (http://r-project.org/) Econometrics for Performance and Risk Analysis
+#
+# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson
+#
+# This R package is distributed under the terms of the GNU Public License (GPL)
+# for full details see the file COPYING
+#
+# $Id: CDrawdown.R 2271 2012-09-02 01:56:23Z braverock $
+#
+###############################################################################

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Normalized.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Normalized.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Normalized.R	2013-08-16 11:08:26 UTC (rev 2796)
@@ -0,0 +1,137 @@
+#' calculate a Normalized Calmar or Sterling reward/risk ratio
+#'
+#' Normalized Calmar and Sterling Ratios are yet another method of creating a
+#' risk-adjusted measure for ranking investments similar to the
+#' \code{\link{SharpeRatio}}.
+#'
+#' Both the Normalized Calmar and the Sterling ratio are the ratio of annualized return
+#' over the absolute value of the maximum drawdown of an investment. The
+#' Sterling ratio adds an excess risk measure to the maximum drawdown,
+#' traditionally and defaulting to 10\%.
+#'
+#' It is also traditional to use a three year return series for these
+#' calculations, although the functions included here make no effort to
+#' determine the length of your series. If you want to use a subset of your
+#' series, you'll need to truncate or subset the input data to the desired
+#' length.
+#'
+#' Many other measures have been proposed to do similar reward to risk ranking.
+#' It is the opinion of this author that newer measures such as Sortino's
+#' \code{\link{UpsidePotentialRatio}} or Favre's modified
+#' \code{\link{SharpeRatio}} are both \dQuote{better} measures, and
+#' should be preferred to the Calmar or Sterling Ratio.
+#'
+#' @aliases Normalized.CalmarRatio Normalized.SterlingRatio
+#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of
+#' asset returns
+#' @param scale number of periods in a year (daily scale = 252, monthly scale =
+#' 12, quarterly scale = 4)
+#' @param excess for Sterling Ratio, excess amount to add to the max drawdown,
+#' traditionally and default .1 (10\%)
+#' @author Brian G. Peterson
+#' @seealso
+#' \code{\link{Return.annualized}}, \cr
+#' \code{\link{maxDrawdown}}, \cr
+#' \code{\link{SharpeRatio.modified}}, \cr
+#' \code{\link{UpsidePotentialRatio}}
+#' @references Bacon, Carl. \emph{Practical Portfolio Performance Measurement
+#' and Attribution}. Wiley, 2004. \cr
+#' Magdon-Ismail, M. and Atiya, A. Maximum drawdown. Risk Magazine, 01 Oct 2004.
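+#' @note The normalization rescales the raw ratio by
+#' \eqn{QP(T)/QP(\tau) \cdot \tau/T}, where \eqn{QP} is the Sharpe-ratio-based
+#' scaling term computed by \code{QP.Norm} below. For intuition (illustrative
+#' numbers only): with an annualized Sharpe ratio of 1, \code{QP.Norm} returns
+#' 0.63519 + 0.5*log(tau) + log(1), i.e. about 0.635 at tau = 1.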
+#' @keywords ts multivariate distribution models
+#' @examples
+#'
+#' data(managers)
+#' Normalized.CalmarRatio(managers[,1,drop=FALSE])
+#' Normalized.CalmarRatio(managers[,1:6])
+#' Normalized.SterlingRatio(managers[,1,drop=FALSE])
+#' Normalized.SterlingRatio(managers[,1:6])
+#'
+#' @export
+#' @rdname CalmarRatio
+#' QP normalization term used by the Normalized Calmar and Sterling Ratios
+QP.Norm <- function (R, tau,scale = NA)
+{
+  Sharpe = as.numeric(SharpeRatio.annualized(R))
+  return(.63519+(.5*log(tau))+log(Sharpe))
+}
+
+CalmarRatio.Normalized <- function (R, tau = 1,scale = NA)
+{ # @author Brian G. Peterson
+
+  # DESCRIPTION:
+  # Inputs:
+  # Ra: in this case, the function anticipates having a return stream as input,
+  #     rather than prices.
+  # tau : scaled Time in Years
+  # scale: number of periods per year
+  # Outputs:
+  # This function returns a Normalized Calmar Ratio
+
+  # FUNCTION:
+
+  R = checkData(R)
+  if(is.na(scale)) {
+    freq = periodicity(R)
+    switch(freq$scale,
+           minute = {stop("Data periodicity too high")},
+           hourly = {stop("Data periodicity too high")},
+           daily = {scale = 252},
+           weekly = {scale = 52},
+           monthly = {scale = 12},
+           quarterly = {scale = 4},
+           yearly = {scale = 1}
+    )
+  }
+  Time = nyears(R)
+  annualized_return = Return.annualized(R, scale=scale)
+  drawdown = abs(maxDrawdown(R))
+  result = (annualized_return/drawdown)*(QP.Norm(R,Time)/QP.Norm(R,tau))*(tau/Time)
+  rownames(result) = "Normalized Calmar Ratio"
+  return(result)
+}
+
+#' @export
+#' @rdname CalmarRatio
+SterlingRatio.Normalized <-
+  function (R, tau=1,scale=NA, excess=.1)
+  { # @author Brian G. Peterson
+
+    # DESCRIPTION:
+    # Inputs:
+    # Ra: in this case, the function anticipates having a return stream as input,
+    #     rather than prices.
+    # scale: number of periods per year
+    # Outputs:
+    # This function returns a Normalized Sterling Ratio
+
+    # FUNCTION:
+    Time = nyears(R)
+    R = checkData(R)
+    if(is.na(scale)) {
+      freq = periodicity(R)
+      switch(freq$scale,
+             minute = {stop("Data periodicity too high")},
+             hourly = {stop("Data periodicity too high")},
+             daily = {scale = 252},
+             weekly = {scale = 52},
+             monthly = {scale = 12},
+             quarterly = {scale = 4},
+             yearly = {scale = 1}
+      )
+    }
+    annualized_return = Return.annualized(R, scale=scale)
+    drawdown = abs(maxDrawdown(R)+excess)
+    result = annualized_return/drawdown*(QP.Norm(R,Time)/QP.Norm(R,tau))*(tau/Time)
+    rownames(result) = paste("Normalized Sterling Ratio (Excess = ", round(excess*100,0), "%)", sep="")
+    return(result)
+  }
+
+###############################################################################
+# R (http://r-project.org/) Econometrics for Performance and Risk Analysis
+#
+# Copyright (c) 2004-2013 Peter Carl and Brian G. Peterson
+#
+# This R package is distributed under the terms of the GNU Public License (GPL)
+# for full details see the file COPYING
+#
+# $Id: CalmarRatioNormalized.R 1955 2012-05-23 16:38:16Z braverock $
+#
+###############################################################################

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R	2013-08-16 11:08:26 UTC (rev 2796)
@@ -0,0 +1,195 @@
+#' Expected Drawdown using Brownian Motion Assumptions
+#'
+#' Works on the model specified by Magdon-Ismail
+#'
+#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of
+#' asset returns

+#' @author R
+#' @keywords Expected Drawdown Using Brownian Motion Assumptions
+#' @rdname EmaxDDGBM
+#' @export
+table.EMaxDDGBM <-
+  function (R,digits =4)
+  {# @author
+
+    # DESCRIPTION:
+    # Downside Risk Summary: Statistics and Stylized Facts
+
+    # Inputs:
+    # R: a regular timeseries of returns (rather than prices)
+    # Output: Table of Estimated Drawdowns
+
+    y = checkData(R, method = "xts")
+    columns = ncol(y)
+    rows = nrow(y)
+    columnnames = colnames(y)
+    rownames = rownames(y)
+    T= nyears(y);
+
+    # for each column, do the following:
+    for(column in 1:columns) {
+      x = y[,column]
+      mu = Return.annualized(x, scale = NA, geometric = TRUE)
+      sig=StdDev(x)
+      gamma<-sqrt(pi/8)
+
+      if(mu==0){
+
+        Ed<-2*gamma*sig*sqrt(T)
+
+      }
+
+      else{
+
+        alpha<-mu*sqrt(T/(2*sig^2))
+
+        x<-alpha^2
+
+        if(mu>0){
+
+          mQp<-matrix(c(
+            0.0005, 0.0010, 0.0015, 0.0020, 0.0025, 0.0050, 0.0075, 0.0100, 0.0125,
+            0.0150, 0.0175, 0.0200, 0.0225, 0.0250, 0.0275, 0.0300, 0.0325, 0.0350,
+            0.0375, 0.0400, 0.0425, 0.0450, 0.0500, 0.0600, 0.0700, 0.0800, 0.0900,
+            0.1000, 0.2000, 0.3000, 0.4000, 0.5000, 1.5000, 2.5000, 3.5000, 4.5000,
+            10, 20, 30, 40, 50, 150, 250, 350, 450, 1000, 2000, 3000, 4000, 5000, 0.019690,
+            0.027694, 0.033789, 0.038896, 0.043372, 0.060721, 0.073808, 0.084693, 0.094171,
+            0.102651, 0.110375, 0.117503, 0.124142, 0.130374, 0.136259, 0.141842, 0.147162,
+            0.152249, 0.157127, 0.161817, 0.166337, 0.170702, 0.179015, 0.194248, 0.207999,
+            0.220581, 0.232212, 0.243050, 0.325071, 0.382016, 0.426452, 0.463159, 0.668992,
+            0.775976, 0.849298, 0.905305, 1.088998, 1.253794, 1.351794, 1.421860, 1.476457,
+            1.747485, 1.874323, 1.958037, 2.020630, 2.219765, 2.392826, 2.494109, 2.565985,
+            2.621743),ncol=2)
+
+          if(x<0.0005){
+
+            Qp<-gamma*sqrt(2*x)
+
+          }
+
+          if(x>0.0005 & x<5000){
+
+            Qp<-spline(log(mQp[,1]),mQp[,2],n=1,xmin=log(x),xmax=log(x))$y
+
+          }
+
+          if(x>5000){
+
+            Qp<-0.25*log(x)+0.49088
+
+          }
+
+          Ed<-(2*sig^2/mu)*Qp
+
+        }
+
+        if(mu<0){
+
+          mQn<-matrix(c(
+            0.0005, 0.0010, 0.0015, 0.0020, 0.0025, 0.0050, 0.0075, 0.0100, 0.0125, 0.0150,
+            0.0175, 0.0200, 0.0225, 0.0250, 0.0275, 0.0300, 0.0325, 0.0350, 0.0375, 0.0400,
+            0.0425, 0.0450, 0.0475, 0.0500, 0.0550, 0.0600, 0.0650, 0.0700, 0.0750, 0.0800,
+            0.0850, 0.0900, 0.0950, 0.1000, 0.1500, 0.2000, 0.2500, 0.3000, 0.3500, 0.4000,
+            0.5000, 1.0000, 1.5000, 2.0000, 2.5000, 3.0000, 3.5000, 4.0000, 4.5000, 5.0000,
+            0.019965, 0.028394, 0.034874, 0.040369, 0.045256, 0.064633, 0.079746, 0.092708,
+            0.104259, 0.114814, 0.124608, 0.133772, 0.142429, 0.150739, 0.158565, 0.166229,
+            0.173756, 0.180793, 0.187739, 0.194489, 0.201094, 0.207572, 0.213877, 0.220056,
+            0.231797, 0.243374, 0.254585, 0.265472,
0.276070, 0.286406, 0.296507, 0.306393,
+            0.316066, 0.325586, 0.413136, 0.491599, 0.564333, 0.633007, 0.698849, 0.762455,
+            0.884593, 1.445520, 1.970740, 2.483960, 2.990940, 3.492520, 3.995190, 4.492380,
+            4.990430, 5.498820),ncol=2)
+
+          if(x<0.0005){
+
+            Qn<-gamma*sqrt(2*x)
+
+          }
+
+          if(x>0.0005 & x<5000){
+
+            Qn<-spline(mQn[,1],mQn[,2],n=1,xmin=x,xmax=x)$y
+
+          }
+
+          if(x>5000){
+
+            Qn<-x+0.50
+
+          }
+
+          Ed<-(2*sig^2/mu)*(-Qn)
+
+        }
+
+      }
+
+      # return(Ed)
+
+      z = c((mu*100),
+            (sig*100),
+            (Ed*100))
+      znames = c(
+        "Annual Returns in %",
+        "Std Deviations in %",
+        "Expected Drawdown in %"
+      )
+      if(column == 1) {
+        resultingtable = data.frame(Value = z, row.names = znames)
+      }
+      else {
+        nextcolumn = data.frame(Value = z, row.names = znames)
+        resultingtable = cbind(resultingtable, nextcolumn)
+      }
+    }
+    colnames(resultingtable) = columnnames
+    ans = base::round(resultingtable, digits)
+    ans
+
+  }
+
+###############################################################################
+###############################################################################
+# R (http://r-project.org/) Econometrics for Performance and Risk Analysis
+#
+# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson
+#
+# This R package is distributed under the terms of the GNU Public License (GPL)
+# for full details see the file COPYING
+#
+# $Id: EmaxDDGBM.R 2271 2012-09-02 01:56:23Z braverock $
+#
+###############################################################################
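The driftless branch above has a closed form that is worth checking in isolation. A minimal sketch with assumed inputs (sig and T are hypothetical, not estimated from data):

    gamma <- sqrt(pi/8)
    sig   <- 0.15              # assumed annualized volatility
    T     <- 3                 # assumed horizon in years
    2*gamma*sig*sqrt(T)        # expected maximum drawdown, about 0.326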
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R	2013-08-16 11:08:26 UTC (rev 2796)
@@ -0,0 +1,75 @@
+#'@title Getmansky Lo Makarov Smoothing Index Parameter
+#'@description
+#' A useful summary statistic for measuring the concentration of weights is
+#' the sum of squares of the moving-average lag coefficients,
+#' \deqn{x = \sum_{j=0}^{k} \theta_j^2 .}
+#' This measure is well known in the industrial organization literature as the
+#' Herfindahl index, a measure of the concentration of firms in a given industry.
+#' The index is maximized when one coefficient is 1 and the rest are 0, in which
+#' case \eqn{x = 1}. In the context of smoothed returns, a lower value of x implies
+#' more smoothing, and the upper bound of 1 implies no smoothing, hence x is
+#' referred to as a ``smoothing index''. The underlying true returns are modeled as
+#'
+#' \deqn{ R_t = \mu + \beta \delta_t + \xi_t}
+#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of
+#' asset returns
+#' @author R
+#' @aliases Return.Geltner
+#' @references Getmansky, M., Lo, A. W., and Makarov, I. \emph{An Econometric
+#' Model of Serial Correlation and Illiquidity in Hedge Fund Returns}, Journal
+#' of Financial Economics 74 (2004), 529-609.
+#'
+#' @keywords ts multivariate distribution models non-iid
+#' @examples
+#'
+#' data(edhec)
+#' head(GLMSmoothIndex(edhec))
+#'
+#' @export
+GLMSmoothIndex<-
+  function(R = NULL, ...)
+  {
+    columns = 1
+    columnnames = NULL
+    #Error handling if R is not NULL
+    if(!is.null(R)){
+      x = checkData(R)
+      columns = ncol(x)
+      n = nrow(x)
+      columnnames = colnames(x)
+
+      # Calculate AutoCorrelation Coefficient
+      for(column in 1:columns) { # for each asset passed in as R
+        y = checkData(x[,column], method="vector", na.rm = TRUE)
+        sum = sum(abs(acf(y,plot=FALSE,lag.max=6)[[1]][2:7]));
+        acflag6 = acf(y,plot=FALSE,lag.max=6)[[1]][2:7]/sum;
+        values = sum(acflag6*acflag6)
+
+        if(column == 1) {
+          result.df = data.frame(Value = values)
+          colnames(result.df) = columnnames[column]
+        }
+        else {
+          nextcol = data.frame(Value = values)
+          colnames(nextcol) = columnnames[column]
+          result.df = cbind(result.df, nextcol)
+        }
+      }
+      rownames(result.df)= paste("GLM Smooth Index")
+
+      return(result.df)
+
+    }
+  }
+
+###############################################################################
+# R (http://r-project.org/) Econometrics for Performance and Risk Analysis
+#
+# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson
+#
+# This R package is distributed under the terms of the GNU Public License (GPL)
+# for full details see the file COPYING
+#
+# $Id: GLMSmoothIndex.R 2163 2012-07-16 00:30:19Z braverock $
+#
+###############################################################################
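The range of the index is easy to verify directly; a standalone sketch with hypothetical (not estimated) lag weights:

    theta <- c(0.6, 0.3, 0.1)   # hypothetical normalized MA(2) lag weights
    sum(theta^2)                # 0.46; equal weights give 1/3, a single weight of 1 gives 1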
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R	2013-08-16 11:08:26 UTC (rev 2796)
@@ -0,0 +1,87 @@
+#' Getmansky Lo Makarov Unsmoothed Return Model
+#'
+#'
+#' True returns represent the flow of information that would determine the equilibrium
+#' value of the fund's securities in a frictionless market. However, true economic
+#' returns are not observed. Instead, \eqn{R^o_t} denotes the reported or observed
+#' return in period t, which is a weighted average of the fund's true returns over
+#' the most recent k+1 periods, including the current period:
+#'
+#' \deqn{R^o_t = \theta_0 R_t + \theta_1 R_{t-1} + \dots + \theta_k R_{t-k}}
+#'
+#' This averaging process captures the essence of smoothed returns in several
+#' respects. From the perspective of illiquidity-driven smoothing, it is consistent
+#' with several models in the nonsynchronous trading literature. For example, Cohen
+#' et al. (1986, Chapter 6.1) propose a similar weighted-average model for observed
+#' returns.
+#'
+#' @param Ra an xts, vector, matrix, data frame, timeSeries or zoo object of
+#' asset returns

+#' @param q number of lags in the moving-average smoothing process
+#' @author R
+#' @references Getmansky, M., Lo, A. W., and Makarov, I. \emph{An Econometric
+#' Model of Serial Correlation and Illiquidity in Hedge Fund Returns}, Journal
+#' of Financial Economics 74 (2004), 529-609.
+#'
+#'
+#' @keywords ts multivariate distribution models
+#' @examples
+#'
+#' data(edhec)
+#' Return.GLM(edhec,4)
+#'
+#' @export
+Return.GLM <-
+  function (Ra,q=3)
+  { # @author Brian G. Peterson, Peter Carl
+
+    # Description:
+
+    # Ra    return vector
+    # q     Lag Factors
+    # Function:
+    library(tseries)
+    library(PerformanceAnalytics)
+    R = checkData(Ra, method="xts")
+    # Get dimensions and labels
+    columns.a = ncol(R)
+    columnnames.a = colnames(R)
+
+    clean.GLM <- function(column.R,q=3) {
+      ma.coeff = as.numeric((arma(column.R,order=c(0,q)))$coef[1:q])
+      column.glm = ma.coeff[q]*lag(column.R,q)
+
+      return(column.glm)
+    }
+
+    for(column.a in 1:columns.a) { # for each asset passed in as R
+      # clean the data and get rid of NAs
+      column.glma = na.skip(R[,column.a],clean.GLM)
+
+      if(column.a == 1) { glm = column.glma }
+      else { glm = cbind (glm, column.glma) }
+
+    }
+
+    colnames(glm) = columnnames.a
+
+    # RESULTS:
+    return(reclass(glm,match.to=Ra))
+
+  }
+
+###############################################################################
+# R (http://r-project.org/) Econometrics for Performance and Risk Analysis
+#
+# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson
+#
+# This R package is distributed under the terms of the GNU Public License (GPL)
+# for full details see the file COPYING
+#
+# $Id: Return.GLM.R 2163 2012-07-16 00:30:19Z braverock $
+#
+###############################################################################

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/UnsmoothReturn.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/UnsmoothReturn.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/UnsmoothReturn.R	2013-08-16 11:08:26 UTC (rev 2796)
@@ -0,0 +1,36 @@
+UnSmoothReturn<-
+  function(R = NULL,q, ...)
+  {
+    columns = 1
+    columnnames = NULL
+    #Error handling if R is not NULL
+    if(!is.null(R)){
+      x = checkData(R)
+      columns = ncol(x)
+      n = nrow(x)
+      columnnames = colnames(x)
+
+      # Calculate AutoCorrelation Coefficient
+      for(column in 1:columns) { # for each asset passed in as R
+        y = checkData(x[,column], method="vector", na.rm = TRUE)
+
+        acflag6 = acf(y,plot=FALSE,lag.max=6)[[1]][2:7]
+        values = sum(acflag6*acflag6)/(sum(acflag6)*sum(acflag6))
+
+        if(column == 1) {
+          result.df = data.frame(Value = values)
+          colnames(result.df) = columnnames[column]
+        }
+        else {
+          nextcol = data.frame(Value = values)
+          colnames(nextcol) = columnnames[column]
+          result.df = cbind(result.df, nextcol)
+        }
+      }
+      return(result.df[1:q,]*R)  # Unsmooth Return
+
+    }
+  }
\ No newline at end of file
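Both Return.GLM and UnSmoothReturn work against the same smoothing mechanism, which is simple to simulate. A toy sketch with hypothetical weights (theta is assumed, not estimated from data):

    theta <- c(0.5, 0.3, 0.2)                # hypothetical smoothing weights, sum to 1
    r_true <- rnorm(100, 0.01, 0.04)         # simulated true returns
    r_obs <- stats::filter(r_true, theta, method = "convolution", sides = 1)
    # r_obs[t] = 0.5*r_true[t] + 0.3*r_true[t-1] + 0.2*r_true[t-2]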
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.Autocorrelation.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.Autocorrelation.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.Autocorrelation.R	2013-08-16 11:08:26 UTC (rev 2796)
@@ -0,0 +1,58 @@
+#' Stacked Bar Plot of Autocorrelation Lag Coefficients
+#'
+#' A wrapper to create a stacked bar plot of the first six autocorrelation
+#' lag coefficients of an asset return series.
+#'
+#' The bars for the six lags are drawn in adjacent rainbow colors with a
+#' legend, following the presentation of autocorrelation lags recommended
+#' by Burghardt, Duncan and Liu (2012).
+#'
+#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of
+#' an asset return
+#' @return Stacked bar plot of lagged return coefficients
+#' @author R
+#' @seealso \code{\link[graphics]{barplot}}
+#' @references Burghardt, G., Duncan, R. and Liu, L. \emph{It's the Autocorrelation, Stupid}. Newedge AlternativeEdge Note, November 2012.
+#' @keywords Autocorrelation lag factors
+#' @examples
+#'
+#' data(edhec)
+#' chart.Autocorrelation(edhec[,1])
+#'
+#' @rdname chart.Autocorrelation
+#' @export
+chart.Autocorrelation <-
+  function (R, ...)
+  { # @author R
+
+    # DESCRIPTION:
+    # A wrapper to create a stacked bar plot of the autocorrelation lag
+    # coefficients of the first six lags
+
+    R = checkData(R, method="xts")
+
+# Graph autos with adjacent bars using rainbow colors
+
+aa= table.Autocorrelation(R)
+  barplot(as.matrix(aa), main="ACF Lag Plot", ylab= "Value of Coefficient",
+          xlab = NULL, col=rainbow(6))
+
+  # Place the legend at the top-right corner with no frame
+  # using rainbow colors
+  legend("topright", c("1","2","3","4","5","6"), cex=0.6,
+         bty="n", fill=rainbow(6));
+
+}
+###############################################################################
+# R (http://r-project.org/) Econometrics for Performance and Risk Analysis
+#
+# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson
+#
+# This R package is distributed under the terms of the GNU Public License (GPL)
+# for full details see the file COPYING
+#
+# $Id: Chart.Autocorrelation.R 2271 2012-09-02 01:56:23Z braverock $
+#
+###############################################################################

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/maxDDGBM.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/maxDDGBM.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/maxDDGBM.R	2013-08-16 11:08:26 UTC (rev 2796)
@@ -0,0 +1,174 @@
+#' Expected Drawdown using Brownian Motion Assumptions
+#'
+#' Works on the model specified by Magdon-Ismail
+#'
+#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of
+#' asset returns

+#' @author R
+#' @keywords Expected Drawdown Using Brownian Motion Assumptions
+#'
+#' @export
+EMaxDDGBM <-
+  function (R,digits =4)
+  {# @author
+
+    # DESCRIPTION:
+    # Downside Risk Summary: Statistics and Stylized Facts

[TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/returnanalytics -r 2796

From noreply at r-forge.r-project.org  Fri Aug 16 18:27:25 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Fri, 16 Aug 2013 18:27:25 +0200 (CEST)
Subject: [Returnanalytics-commits] r2797 - in pkg/PortfolioAnalytics: R man
Message-ID: <20130816162725.F194B183E5C@r-forge.r-project.org>

Author: rossbennett34
Date: 2013-08-16 18:27:25 +0200 (Fri, 16 Aug 2013)
New Revision: 2797

Modified:
   pkg/PortfolioAnalytics/R/portfolio.R
   pkg/PortfolioAnalytics/man/portfolio.spec.Rd
Log:
Adding documentation for portfolio.spec

Modified: pkg/PortfolioAnalytics/R/portfolio.R
===================================================================
--- pkg/PortfolioAnalytics/R/portfolio.R	2013-08-16 11:08:26 UTC (rev 2796)
+++ pkg/PortfolioAnalytics/R/portfolio.R	2013-08-16 16:27:25 UTC (rev 2797)
@@ -12,12 +12,30 @@
 
 #' constructor for class portfolio
 #'
+#' The portfolio object is created with \code{portfolio.spec}. The portfolio
+#' object is an S3 object of class 'portfolio' used to hold the seed assets,
+#' constraints, objectives, and other information about the portfolio. The only
+#' required argument to \code{portfolio.spec} is \code{assets}.
+#' +#' The portfolio object contains the following elements: +#' \itemize{ +#' \item{\code{assets}}{ named vector of the seed weights} +#' \item{\code{category_labels}}{ character vector to categorize the assets by sector, geography, etc.} +#' \item{\code{weight_seq}}{ sequence of weights used by \code{\link{random_portfolios}}. See \code{\link{generatesequence}}} +#' \item{\code{constraints}}{ a list of constraints added to the portfolio object with \code{\link{add.constraint}}} +#' \item{\code{objectives}}{ a list of objectives added to the portfolio object with \code{\link{add.objective}}} +#' \item{\code{call}}{ the call to \code{portfolio.spec} with all of the specified arguments} +#' } +#' #' @param assets number of assets, or optionally a named vector of assets specifying seed weights. If seed weights are not specified, an equal weight portfolio will be assumed. -#' @param category_labels character vector to categorize assets by sector, industry, geography, market-cap, currency, etc. -#' @param weight_seq seed sequence of weights, see \code{\link{generatesequence}} +#' @param category_labels character vector to categorize assets by sector, industry, geography, market-cap, currency, etc. Default NULL +#' @param weight_seq seed sequence of weights, see \code{\link{generatesequence}} Default NULL #' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE. +#' @return an object of class \code{portfolio} #' @author Ross Bennett #' @examples +#' data(edhec) +#' pspec <- portfolio.spec(assets=colnames(edhec)) #' pspec <- portfolio.spec(assets=10, weight_seq=generatesequence()) #' @export portfolio.spec <- function(assets=NULL, category_labels=NULL, weight_seq=NULL, message=FALSE) { Modified: pkg/PortfolioAnalytics/man/portfolio.spec.Rd =================================================================== --- pkg/PortfolioAnalytics/man/portfolio.spec.Rd 2013-08-16 11:08:26 UTC (rev 2796) +++ pkg/PortfolioAnalytics/man/portfolio.spec.Rd 2013-08-16 16:27:25 UTC (rev 2797) @@ -13,18 +13,43 @@ \item{category_labels}{character vector to categorize assets by sector, industry, geography, market-cap, - currency, etc.} + currency, etc. Default NULL} \item{weight_seq}{seed sequence of weights, see - \code{\link{generatesequence}}} + \code{\link{generatesequence}} Default NULL} \item{message}{TRUE/FALSE. The default is message=FALSE. Display messages if TRUE.} } +\value{ + an object of class \code{portfolio} +} \description{ - constructor for class portfolio + The portfolio object is created with + \code{portfolio.spec}. The portfolio object is an S3 + object of class 'portfolio' used to hold the seed assets, + constraints, objectives, and other information about the + portfolio. The only required argument to + \code{portfolio.spec} is \code{assets}. } +\details{ + The portfolio object contains the following elements: + \itemize{ \item{\code{assets}}{ named vector of the seed + weights} \item{\code{category_labels}}{ character vector + to categorize the assets by sector, geography, etc.} + \item{\code{weight_seq}}{ sequence of weights used by + \code{\link{random_portfolios}}. 
See
+  \code{\link{generatesequence}}}
+  \item{\code{constraints}}{ a list of constraints added to
+  the portfolio object with \code{\link{add.constraint}}}
+  \item{\code{objectives}}{ a list of objectives added to
+  the portfolio object with \code{\link{add.objective}}}
+  \item{\code{call}}{ the call to \code{portfolio.spec}
+  with all of the specified arguments} }
+}
 \examples{
+data(edhec)
+pspec <- portfolio.spec(assets=colnames(edhec))
 pspec <- portfolio.spec(assets=10, weight_seq=generatesequence())
 }
 \author{

From noreply at r-forge.r-project.org  Fri Aug 16 18:31:25 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Fri, 16 Aug 2013 18:31:25 +0200 (CEST)
Subject: [Returnanalytics-commits] r2798 - in pkg/PortfolioAnalytics: . sandbox vignettes
Message-ID: <20130816163126.011F1183E5C@r-forge.r-project.org>

Author: braverock
Date: 2013-08-16 18:31:25 +0200 (Fri, 16 Aug 2013)
New Revision: 2798

Added:
   pkg/PortfolioAnalytics/vignettes/
   pkg/PortfolioAnalytics/vignettes/DesignThoughts.Rnw
   pkg/PortfolioAnalytics/vignettes/PA.bib
   pkg/PortfolioAnalytics/vignettes/optimization-overview.Snw
   pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw
Removed:
   pkg/PortfolioAnalytics/inst/
   pkg/PortfolioAnalytics/sandbox/portfolio_vignette.Rnw
Log:
- take out old inst/doc dir, move to vignettes as is the current preference of R CMD check
- move new portfolio_vignette to vignettes dir from sandbox

Deleted: pkg/PortfolioAnalytics/sandbox/portfolio_vignette.Rnw
===================================================================
--- pkg/PortfolioAnalytics/sandbox/portfolio_vignette.Rnw	2013-08-16 16:27:25 UTC (rev 2797)
+++ pkg/PortfolioAnalytics/sandbox/portfolio_vignette.Rnw	2013-08-16 16:31:25 UTC (rev 2798)
@@ -1,211 +0,0 @@
-\documentclass[12pt,letterpaper,english]{article}
-\usepackage[OT1]{fontenc}
-\usepackage{Sweave}
-\usepackage{verbatim}
-\usepackage{Rd}
-\usepackage{Sweave}
-
-\begin{document}
-
-\title{Creating a Portfolio Object with PortfolioAnalytics}
-\author{Ross Bennett}
-
-\maketitle
-
-\begin{abstract}
-The purpose of this vignette is to demonstrate the new interface in PortfolioAnalytics to specify a portfolio object and to add constraints and objectives.
-\end{abstract}
-
-\tableofcontents
-
-\section{Getting Started}
-\subsection{Load Packages}
-Load the necessary packages.
-
-<<>>=
-library(PortfolioAnalytics)
-@
-
-\subsection{Data}
-The edhec data set from the PerformanceAnalytics package will be used as example data.
-<<>>=
-data(edhec)
-
-# Use the first 4 columns in edhec for a returns object
-returns <- edhec[, 1:4]
-print(head(returns, 5))
-
-# Get a character vector of the fund names
-fund.names <- colnames(returns)
-@
-
-\section{Creating the "portfolio" object}
-The portfolio object is instantiated with the \code{portfolio.spec} function. The main argument to \code{portfolio.spec} is assets, which is a required argument. The assets argument can be a scalar value for the number of assets, a character vector of fund names, or a named vector of seed weights. If seed weights are not specified, an equal weight portfolio will be assumed.
-
-The \code{pspec} object is an S3 object of class "portfolio". When first created, the portfolio object has an element named \code{assets} with the seed weights, an element named \code{category\_labels}, an element named \code{weight\_seq} with a seed sequence of weights if specified, an empty constraints list and an empty objectives list.
- -<<>>= -# Specify a portfolio object by passing a character vector for the -# assets argument. -pspec <- portfolio.spec(assets=fund.names) -print.default(pspec) -@ - -\section{Adding Constraints to the Portfolio Object} -Adding constraints to the portfolio object is done with \code{add.constraint}. The \code{add.constraint} function is the main interface for adding and/or updating constraints to the portfolio object. This function allows the user to specify the portfolio to add the constraints to, the type of constraints, arguments for the constraint, and whether or not to enable the constraint (\code{enabled=TRUE} is the default). If updating an existing constraint, the indexnum argument can be specified. - -Here we add a constraint that the weights must sum to 1, or the full investment constraint. -<<>>= -# Add the full investment constraint that specifies the weights must sum to 1. -pspec <- add.constraint(portfolio=pspec, - type="weight_sum", - min_sum=1, - max_sum=1) - -# The full investment constraint can also be specified with type="full_investment" -# pspec <- add.constraint(portfolio=pspec, type="full_investment") - -# Another common constraint is that portfolio weights sum to 0. -# This can be specified any of the following ways -# pspec <- add.constraint(portfolio=pspec, type="weight_sum", -# min_sum=0, -# max_sum=0) -# pspec <- add.constraint(portfolio=pspec, type="dollar_neutral") -# pspec <- add.constraint(portfolio=pspec, type="active") -@ - -Here we add box constraints for the asset weights so that the minimum weight of any asset must be greater than or equal to 0.05 and the maximum weight of any asset must be less than or equal to 0.4. The values for min and max can be passed in as scalars or vectors. If min and max are scalars, the values for min and max will be replicated as vectors to the length of assets. If min and max are not specified, a minimum weight of 0 and maximum weight of 1 are assumed. Note that min and max can be specified as vectors with different weights for linear inequality constraints. -<<>>= -# Add box constraints -pspec <- add.constraint(portfolio=pspec, - type="box", - min=0.05, - max=0.4) - -# min and max can also be specified per asset -# pspec <- add.constraint(portfolio=pspec, -# type="box", -# min=c(0.05, 0, 0.08, 0.1), -# max=c(0.4, 0.3, 0.7, 0.55)) - -# A special case of box constraints is long only where min=0 and max=1 -# The default action is long only if min and max are not specified -# pspec <- add.constraint(portfolio=pspec, type="box") -# pspec <- add.constraint(portfolio=pspec, type="long_only") -@ - -The portfolio object now has 2 objects in the constraints list. One object for the sum of weights constraint and another for the box constraint. -<<>>= -print(pspec) -@ - -The \code{summary} function gives a more detailed view of the constraints. -<<>>= -summary(pspec) -@ - - -Another common constraint that can be added is a group constraint. Group constraints are currently supported by the ROI, DEoptim, and random portfolio solvers. The following code groups the assets such that the first 3 assets are grouped together labeled GroupA and the fourth asset is in its own group labeled GroupB. The \code{group\_min} argument specifies that the sum of the weights in GroupA must be greater than or equal to 0.1 and the sum of the weights in GroupB must be greater than or equal to 0.15. 
The \code{group\_max} argument specifies that the sum of the weights in GroupA must be less than or equal to 0.85 and the sum of the weights in GroupB must be less than or equal to 0.55. The \code{group\_labels} argument is optional and is useful for labeling groups in terms of market capitalization, sector, etc.
-<<>>=
-# Add group constraints
-pspec <- add.constraint(portfolio=pspec, type="group",
-                        groups=c(3, 1),
-                        group_min=c(0.1, 0.15),
-                        group_max=c(0.85, 0.55),
-                        group_labels=c("GroupA", "GroupB"))
-@
-
-A position limit constraint can be added to limit the number of assets with non-zero, long, or short positions. The ROI solver used for maximizing return and ETL/ES/cVaR objectives supports position limit constraints for \code{max\_pos} (i.e. using the glpk plugin). \code{max\_pos} is not supported for the ROI solver using the quadprog plugin. Note that \code{max\_pos\_long} and \code{max\_pos\_short} are not supported for either ROI solver. Position limit constraints are fully supported for DEoptim and random solvers.
-
-<<>>=
-# Add position limit constraint such that we have a maximum number of three assets with non-zero weights.
-pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos=3)
-
-# Can also specify maximum number of long positions and short positions
-# pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos_long=3, max_pos_short=3)
-@
-
-A target diversification can be specified as a constraint. Diversification is defined as $diversification = 1 - \sum_{i=1}^N w_i^2$ for $N$ assets. The optimizers work by applying a penalty if the diversification value is more than 5\% away from \code{div\_target}.
-<<>>=
-pspec <- add.constraint(portfolio=pspec, type="diversification", div_target=0.7)
-@
-
-A target turnover can be specified as a constraint. The turnover is calculated from a set of initial weights. The initial weights can be specified; by default they are the seed weights in the portfolio object. The optimizers work by applying a penalty if the turnover value is more than 5\% away from \code{turnover\_target}. Note that the turnover constraint is not currently supported for the ROI solvers.
-<<>>=
-pspec <- add.constraint(portfolio=pspec, type="turnover", turnover_target=0.2)
-@
-
-A target mean return can be specified as a constraint.
-<<>>=
-pspec <- add.constraint(portfolio=pspec, type="return", return_target=0.007)
-@
-
-This demonstrates adding constraints to the portfolio object. As an alternative to adding constraints directly to the portfolio object, constraints can be specified as separate objects.
-
-\subsection{Specifying Constraints as Separate Objects}
-The following examples will demonstrate how to specify constraints as separate objects for all constraint types.
-
-<<>>=
-# full investment constraint
-weight_constr <- weight_sum_constraint(min_sum=1, max_sum=1)
-
-# box constraint
-box_constr <- box_constraint(assets=pspec$assets, min=0, max=1)
-
-# group constraint
-group_constr <- group_constraint(assets=pspec$assets, groups=c(3, 1),
-                                 group_min=c(0.1, 0.15),
-                                 group_max=c(0.85, 0.55),
-                                 group_labels=c("GroupA", "GroupB"))
-
-# position limit constraint
-poslimit_constr <- position_limit_constraint(assets=pspec$assets, max_pos=3)
-
-# diversification constraint
-div_constr <- diversification_constraint(div_target=0.7)
-
-# turnover constraint
-to_constr <- turnover_constraint(turnover_target=0.2)
-
-# target return constraint
-ret_constr <- return_constraint(return_target=0.007)
-@
-
-\section{Adding Objectives}
-Business objectives can be added to the portfolio object with \code{add.objective}. The \code{add.objective} function is the main function for adding and/or updating business objectives to the portfolio object. This function allows the user to specify the portfolio to add the objectives to, the type (currently 'return', 'risk', or 'risk\_budget'), name of the objective function, arguments to the objective function, and whether or not to enable the objective. If updating an existing objective, the indexnum argument can be specified.
-
-Here we add a risk objective to minimize portfolio variance. Note that the name of the function must correspond to a function in R. Many functions are available in the PerformanceAnalytics package.
-<<>>=
-pspec <- add.objective(portfolio=pspec,
-                       type='risk',
-                       name='var',
-                       enabled=TRUE)
-@
-
-The portfolio object now has 1 object in the objectives list for the risk objective we just added.
-<<>>=
-print(pspec$objectives)
-@
-
-We now have a portfolio object with the following constraints and objectives to pass to \code{optimize.portfolio}.
-\begin{itemize}
-  \item Constraints
-  \begin{itemize}
-    \item weight\_sum: The weights sum to 1 (i.e. full investment constraint)
-    \item box: minimum weight of any asset must be greater than or equal to 0.05 and the maximum weight of any asset must be less than or equal to 0.4.
-\end{itemize}
-  \item Objectives
-  \begin{itemize}
-    \item risk objective: minimize portfolio var(iance).
-\end{itemize}
-
-\end{itemize}
-
-\section{Optimization}
-Note that this currently does not work, but is how I envision the portfolio object replacing the current constraint object.
-<<>>=
-#out <- optimize.portfolio(R=returns, portfolio=pspec, optimize_method="ROI")
-@
-
-
-\end{document}
\ No newline at end of file

Copied: pkg/PortfolioAnalytics/vignettes/DesignThoughts.Rnw (from rev 2796, pkg/PortfolioAnalytics/inst/doc/DesignThoughts.Rnw)
===================================================================
--- pkg/PortfolioAnalytics/vignettes/DesignThoughts.Rnw	(rev 0)
+++ pkg/PortfolioAnalytics/vignettes/DesignThoughts.Rnw	2013-08-16 16:31:25 UTC (rev 2798)
@@ -0,0 +1,186 @@
+%!TEX root = ../39.tex
+
+\documentclass[12pt,letterpaper,english]{article}
+\pagestyle{plain} \setlength{\textheight}{21cm}
+\setlength{\textwidth}{15cm} \setlength{\footskip}{1.0cm}
+
+%\usepackage{harvard}
+\usepackage{makeidx}  % allows index generation
+\usepackage{graphicx} % standard LaTeX graphics tool
+                      % when including figure files
+\usepackage{multicol} % used for the two-column index
+\usepackage[bottom]{footmisc}% places footnotes at page bottom
+\usepackage{ctable}
+\usepackage{graphicx}
+\usepackage{caption}
+\usepackage{subcaption}
+\usepackage{verbatim}
+\usepackage{hyperref}
+% see the list of further useful packages
+% in the Springer Reference Guide, Sects. 2.3, 3.1-3.3
+
+\usepackage[longnamesfirst]{natbib}
+\usepackage{rotating}
+
+%\usepackage{noweb}
+%\usepackage[ae,hyper]{Rd}
+\usepackage{Rd}
+\usepackage{Sweave}
+\usepackage{graphicx}
+
+\usepackage{booktabs}
+\usepackage{enumerate}
+\usepackage{url}
+%\usepackage[bottomafter,first]{draftcopy}
+
+\newcommand{\rem}[1]{}
+
+
+%\numberwithin{equation}{section}
+\renewcommand{\baselinestretch}{1.3}
+
+
+\newcommand{\afterbox}{\bigskip
+  \normalsize
+  \lineskip=.45ex
+  \baselineskip 3.5ex
+  \noindent
+  \textheight 19.5cm}
+
+\newcommand{\argmin}{{\operatorname{argmin}}}
+\newcommand{\argmax}{{\operatorname{argmax}}}
+\newcommand{\med}{{\operatorname{med}}}
+\newcommand{\ave}{{\operatorname{ave}}}
+\newcommand{\Tr}{{\operatorname{Tr}}}
+%\newcommand{\plim}{{\operatorname{plim}}}
+
+\renewcommand{\cite}{\citeasnoun}
+
+\def \vec{{\mbox{vec }}}
+\def \unvec{{\mbox{unvec}}}
+\def \diag{{\mbox{diag }}}
+
+
+\synctex=1
+
+\setlength{\parindent}{0mm} \setlength{\parskip}{1.5mm}
+
+\newcommand{\comm}[1]{\begin{quote}{\large \bf (#1)}\end{quote}}
+
+\begin{document}
+\vspace{-2cm}
+%\baselineskip=20pt
+\renewcommand{\baselinestretch}{1}
+\title{Discussion of Upcoming and Desired \\ Design and Coding Decisions \\ in PortfolioAnalytics (\citeyear{PortfolioAnalytics})}
+
+
+
+\author{
+Brian G. Peterson\\
+brian at braverock.com \\ \\ \\ }
+
+\date{\today}
+
+\maketitle
+\thispagestyle{empty} % produces no numbering on the first page
+
+\renewcommand{\baselinestretch}{1}
+\begin{abstract}
+PortfolioAnalytics has grown significantly since it was initially written, and will continue to grow and change, especially with the increased involvement of additional portfolio managers and with work on the next version of Doug Martin et al.'s \emph{Modern Portfolio Optimization} \citep{Scherer2005}.
+
+This document lays out some current state information, some design musings, and some things that we desire to address.
+
+It is my hope that interested readers and users will give us feedback, and possibly even code contributions, to make some of this functionality a reality.
+\end{abstract}
+
+%\end{titlepage}
+
+%\baselineskip=20pt
+\newpage
+\renewcommand{\baselinestretch}{1.3}
+
+\tableofcontents
+
+\newpage
+
+\section{Introduction}
+%% \ref{sec:CVaRbudgets}
+%%\section{Portfolio CVaR budgets \label{sec:CVaRbudgets}}
+%%\subsection{Definition}
+Differences between constraints and objectives.
+
+Renaming the constraints object; separating constraints and objectives; collections of objectives; multi-objective optimization.
+
+Specifying constraints.
+
+Relaxing constraints: penalized versus direct.
+
+
+
+\section{Current State \label{sec:currentstate}}
+Our overall goal with PortfolioAnalytics is to allow the use of many different portfolio solvers to solve the same portfolio problem specification.
+
+\subsection{Constraints Current State\label{sec:constraints}}
+On the constraints front, this includes support for box constraints, inequality constraints, turnover, and full investment.
+
+\begin{description}
+
+\item[ Box Constraints ] box constraints (min/max) are supported for all optimization engines; this is a basic feature of any optimization solver in R. It is worth noting that \emph{most} optimization solvers in R support \emph{only} box constraints.
+\\ Box constraints are of course necessary, but not sufficient, and a portfolio that satisfies the box constraints may very well not satisfy any number of other constraints on the combinations of weights.
+
+\item[ Full Investment Constraint ] a full investment constraint or min\_sum, max\_sum constraint is supported for all solvers inside the constrained\_objective function using either a normalization method or a penalty method.
+The normalization method simply scales the supplied parameters to either the minimum or the maximum if the sum is outside the allowed bounds. This has the advantage of being fast, but the disadvantage of decreasing the number of tested portfolios that have assets at your max/min box constraints, because of the scaling effects. You'll get close, but very few tested portfolios, perhaps none, will have assets that exactly hit the max or min box constrained weights.
+\\ The penalty method simply uses the absolute value of the difference of the sum of the weight vector from your min\_sum or max\_sum (depending on which was violated) as part of the penalized objective output that the solver is working on. This has the advantage that you are not doing transformations on the supplied weight vector inside the constrained\_objective function, but the disadvantage that it may take the solver a long time to find a population that meets the objectives.
+\\ For the ROI solvers, min\_sum and max\_sum are supported as linear constraints on the sum of the weights vector.
+
+\item[Linear Inequality Constraints] individual linear inequality constraints are currently supported only for the ROI solvers, because ROI supports these types of constraints for at least the Rglpk and quadprog solvers, and apparently also several other solvers that directly support these types of constraints, such as ipop. These constraints can indicate that specific assets must be larger or smaller than other assets in the portfolio.
+
+\item[Group Inequality Constraints] linear group inequality constraints are currently supported only for the ROI solvers. These constraints can separate the portfolio assets into groups, and apply basic inequalities "$>$","$<$","$>=$", "$<=$", etc. to them so that the sum of the weights in the group must satisfy the constraint versus another group or groups.
+
+
+\item[Turnover] turnover is currently supported as an objective that can be added to your overall portfolio objectives, and is handled inside constrained\_objective. I think that it could also be handled as a true constraint, which I'll discuss in a bit.
+
+\end{description}
+
+\section{Improving the Current State}
+
+
+\subsection{Modularize Constraints}
+Today, the box constraints are included in the \code{constraints} constructor. It would be better to have a portfolio specification object that included multiple sections: constraints, objectives, assets. (This is how the object is organized, but that's not obvious to the user.) I think that we should have an \code{add.constraint} function, like \code{add.objective}, that would add specific types of constraints:
+ \begin{enumerate}
+  \item box constraints
+  \item asset inequality constraints
+  \item group inequality constraints
+  \item turnover constraint
+  \item full investment or leverage constraint
+ \end{enumerate}
+
+\subsection{Creating a mapping function}
+
+\code{DEoptim} contains the ability to use a \emph{mapping function} to manage constraints, but there are no generalized mapping functions available. The mapping function takes in a vector of optimization parameters (weights), tests it for feasibility, and either transforms the vector to a vector of feasible weights or provides a penalty term to add to the objective value. For PortfolioAnalytics, I think that we can write a constraint mapping function that could do the trick, even with general optimization solvers that use only box constraints.
+The mapping function should have the following features:
+ \begin{itemize}
+  \item[methods:] the methods should be able to be turned on and off, and applied in different orders. For some constraint mappings, it will be important to do things in a particular order. Also, for solvers that support certain types of constraints directly, it will be important to use the solver's features, and not use the corresponding mapping functionality.
+  \item[layering:] \code{ROI} contains the function \code{rbind.L\_constraint}, which combines the various linear inequality constraints into a single model. We should examine this and see if we can make use of it, or something like it, to create a consolidated inequality constraint mapping capability.
+  \item[relocatable:] for some solvers such as \code{DEoptim}, the solver can use the mapping function to only evaluate the objective for portfolios that meet the constraints. For other solvers, where only box constraints are supported, we will need to either penalize or transform (see the discussion above in \ref{sec:currentstate}) the weights vector later in the process, inside the \code{constrained\_objective} function.
+  \item[relax constraints:] we need the ability to relax infeasible constraints, either via the penalty method (just find the closest) or when transforming the weights vector to a feasible set. See also \ref{sec:penalty} for a discussion of adaptive penalties.
+
+ \end{itemize}
+
+The mapping function could easily be incorporated directly into random portfolios, and some code might even move from random portfolios into the mapping function. DEoptim can call the mapping function directly when generating a population. For other solvers, we'll need to read the constraint specification, determine what types of constraints need to be applied, and utilize any solver-specific functionality to support as many of them as possible.
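+To make the normalize-or-penalize idea concrete, here is a minimal sketch of what such a mapping function might look like for the full investment and box constraints. This is illustrative only, not the implementation in PortfolioAnalytics; the name \code{map\_constraints} and its arguments are hypothetical, and it assumes a long-only weight vector with a positive sum.
+<<eval=FALSE>>=
+# hypothetical sketch: repair the leverage constraint by scaling,
+# then report any remaining box constraint violation as a penalty
+map_constraints <- function(w, min_sum=0.99, max_sum=1.01, min=NULL, max=NULL) {
+  # scale the weights so their sum falls inside [min_sum, max_sum]
+  if (sum(w) < min_sum) w <- w * (min_sum / sum(w))
+  if (sum(w) > max_sum) w <- w * (max_sum / sum(w))
+  # measure how far the scaled weights still violate the box constraints
+  penalty <- 0
+  if (!is.null(min)) penalty <- penalty + sum(pmax(min - w, 0))
+  if (!is.null(max)) penalty <- penalty + sum(pmax(w - max, 0))
+  # return the repaired weights plus a penalty term for the solver
+  list(weights=w, penalty=penalty)
+}
+@
+A solver like \code{DEoptim} could call something of this shape when generating a population, while \code{constrained\_objective} could add the returned penalty to the objective value.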
+For any remaining constraints that the solver cannot apply directly, we can call the mapping function to either penalize or transform the weights via the remaining methods inside \code{constrained\_objective}.
+
+
+\subsection{Penalty Functions \label{sec:penalty}}
+The current state uses a user-defined fixed penalty function for each objective, multiplying the exceedance or difference of the objective from a target against a fixed penalty. This requires the user to understand how penalty terms influence optimization, and to modify their terms as required.
+
+Multiple papers since roughly 2003 suggest that an adaptive penalty, one that changes with the number of iterations and the variability of the answers for the objective, can significantly improve convergence. [add references]. As we re-do the constraints, we should consider applying the penalty there for small feasible constraint areas (e.g. after trying several hundred times to get a random draw that fits, use adaptively penalized constraints to get as close as possible), but perhaps even more importantly we should support an adaptive penalty in the objectives.
+
+Several different methods have been proposed. In the case of constraints,
+penalties should probably be relaxed as more infeasible solutions are found, as the feasible space is likely to be small. In the case of objectives, arguably the opposite is true: penalties should increase as the number of iterations increases, to speed convergence to a single solution, hopefully at or near the global minimum.
+
+
+\section{Bibliography}
+\bibliographystyle{chicago}
+\bibliography{PA}
+
+\end{document}

Copied: pkg/PortfolioAnalytics/vignettes/PA.bib (from rev 2796, pkg/PortfolioAnalytics/inst/doc/PA.bib)
===================================================================
--- pkg/PortfolioAnalytics/vignettes/PA.bib	(rev 0)
+++ pkg/PortfolioAnalytics/vignettes/PA.bib	2013-08-16 16:31:25 UTC (rev 2798)
@@ -0,0 +1,434 @@
+% This file was created with JabRef 2.5.
+% Encoding: UTF-8
+
+ at ARTICLE{Ardia2010,
+  author = {Ardia, David and Boudt, Kris and Carl, Peter and Mullen, Katharine
+    and Peterson, Brian},
+  title = {Differential evolution (DEoptim) for non-convex portfolio optimization},
+  journal = {Mimeo},
+  year = {2010},
+  owner = {Administrator},
+  timestamp = {2010.05.30}
+}
+
+ at MANUAL{DEoptim,
+  title = {{DEoptim}: Differential Evolution Optimization in {R}},
+  author = {Ardia, David and Mullen, Katharine},
+  year = {2009},
+  note = {R package version 2.00-04},
+  url = {http://CRAN.R-project.org/package=DEoptim}
+}
+
+ at ARTICLE{Bollerslev90,
+  author = {Bollerslev, T.},
+  title = {Modeling the Coherence in Short-run Nominal Exchange Rates: A Multivariate
+    Generalized {ARCH} Model},
+  journal = {Review of Economics and Statistics},
+  year = {1990},
+  volume = {72},
+  pages = {498-505},
+  owner = {Administrator},
+  timestamp = {2010.06.14}
+}
+
+ at MISC{BoudtCarlPeterson2010,
+  author = {Boudt, Kris and Carl, Peter and Peterson, Brian G.},
+  title = {Portfolio Optimization with Conditional Value-at-Risk Budgets},
+  month = jan,
+  year = {2010},
+  owner = {ardiad},
+  timestamp = {2010.02.09}
+}
+
+
+ at MISC{PortfolioAnalytics,
+  author = {Kris Boudt and Peter Carl and Brian G.
Peterson}, + title = {{PortfolioAnalytics}: Portfolio Analysis, including numeric methods + for optimization of portfolios}, + year = {2012}, + note = {R package version 0.8.2}, + owner = {brian}, + timestamp = {2012.09.01}, + url = {https://r-forge.r-project.org/projects/returnanalytics/} +} + at INPROCEEDINGS{BoudtPetersonCarl2008, + author = {Boudt, Kris and Peterson, Brian G and Carl, Peter}, + title = {Hedge Fund Portfolio Selection with Modified Expected Shortfall}, + booktitle = {Computational Finance and its Applications III}, + year = {2008}, + editor = {Brebbia, C. and Constantino, M. and Larran, M.}, + series = {WIT Transactions on Modelling and Simulation}, + publisher = {WIT, Southampton}, + owner = {Administrator}, + timestamp = {2010.02.01} +} + + at ARTICLE{Boudt2008, + author = {Boudt, Kris and Peterson, Brian G. and Croux, Christophe}, + title = {Estimation and Decomposition of Downside Risk for Portfolios with + Non-Normal Returns}, + journal = {Journal of Risk}, + year = {2008}, + volume = {11}, + pages = {79-103}, + number = {2}, + keywords = {Value at Risk, VaR, Component Value at Risk, Expected Shortfall, ES, + Conditional Value at Risk, CVaR, risk contribution, portfolio allocation, + Cornish-Fisher expansion, Edgeworth expansion}, + owner = {brian}, + timestamp = {2007.09.12} +} + + at ARTICLE{Burns2010, + author = {Burns}, + title = {http://www.burns-stat.com/pages/Finance/random_portfolios.html}, + owner = {Administrator}, + timestamp = {2010.05.30} +} + + at ARTICLE{BornerHigginsKantelhardtScheiter2007, + author = {B{\"{o}}rner, Jan and Higgins, Steven I. and Kantelhardt, Jochen + and Scheiter, Simon}, + title = {Rainfall or Price Variability: What Determines Rangeland Management + Decisions? A Simulation-Optimization Approach to {S}outh {A}frican + Savanas}, + journal = {Agricultural Economics}, + year = {2007}, + volume = {37}, + pages = {189-200}, + number = {2--3}, + month = sep # {--} # nov, + doi = {10.1111/j.1574-0862.2007.00265.x}, + owner = {ardiad}, + timestamp = {2009.12.03} +} + + at ARTICLE{CaoVilarDevia2009, + author = {Cao, Ricardo and Vilar, Juan M. and Devia, Andres}, + title = {Modelling Consumer Credit Risk via Survival Analysis}, + journal = {Statistics \& Operations Research Transactions}, + year = {2009}, + volume = {33}, + pages = {3-30}, + number = {1}, + month = jan # {-} # jun, + owner = {ardiad}, + timestamp = {2009.12.03} +} + + at MISC{Carl2007, + author = {Peter Carl and Brian G. Peterson}, + title = {{PerformanceAnalytics}: Econometric Tools for Performance and Risk + Analysis in {R}}, + year = {2009}, + note = {R package version 1.0.0}, + owner = {brian}, + timestamp = {2008.02.01}, + url = {http://braverock.com/R/} +} + + at MISC{CarlPetersonBoudt2010, + author = {Carl, Peter and Peterson, Brian G. and Boudt, Kris}, + title = {Business Objectives and Complex Portfolio Optimization}, + howpublished = {Presentation at R/Finance 2010. Available at: \url{http://www.rinfinance.com/agenda/2010/Carl+Peterson+Boudt_Tutorial.pdf}}, + year = {2010}, + owner = {ardiad}, + timestamp = {2010.02.09} +} + + at MANUAL{foreach, + title = {foreach: Foreach looping construct for R}, + author = {REvolution Computing}, + year = {2009}, + note = {R package version 1.3.0}, + url = {http://CRAN.R-project.org/package=foreach} +} + + at ARTICLE{Cornish1937, + author = {Cornish, Edmund A. 
and Fisher, Ronald A.}, + title = {Moments and Cumulants in the Specification of Distributions}, + journal = {Revue de l'Institut International de Statistique}, + year = {1937}, + volume = {5}, + pages = {307-320}, + number = {4}, + owner = {brian}, + timestamp = {2007.08.19} +} + + at ARTICLE{Favre2002, + author = {Favre, Laurent and Galeano, Jose-Antonio}, + title = {Mean-Modified Value-at-Risk Optimization with Hedge Funds}, + journal = {Journal of Alternative Investment}, + year = {2002}, + volume = {5}, + pages = {2-21}, + number = {2}, + owner = {brian}, + timestamp = {2007.07.25} +} + + at INCOLLECTION{GilliMaringerWinker2008, + author = {Gilli, Manfred and Maringer, Dietmar G. and Winker, Peter}, + title = {Applications of Heuristics in Finance}, + booktitle = {Handbook on Information Technology in Finance}, + publisher = {Springer-Verlag}, + year = {2008}, + editor = {Schlottmann, D. and Weinhardt, C. and Schlottmann, F.}, + chapter = {26}, + address = {Berlin, Heidelberg}, + owner = {ardiad}, + timestamp = {2010.02.07} +} + + at MISC{GilliSchumann2009, + author = {Gilli, Mandfred and Schumann, Enrico}, + title = {Heuristic Optimisation in Financial Modelling}, + howpublished = {COMISEF wps-007 09/02/2009}, + year = {2009}, + owner = {ardiad}, + timestamp = {2010.02.07} +} + + at MISC{GilliWinker2008, + author = {Gilli, Mandfred and Winker, Peter}, + title = {A Review of Heuristic Optimization Methods in Econometrics}, + howpublished = {Swiss Institute Research paper series 08-12}, + month = dec, + year = {2008}, + owner = {ardiad}, + timestamp = {2010.02.07} +} + + at ARTICLE{HigginsKantelhardtScheiterBoerner2007, + author = {Higgins, Steven I. and Kantelhardt, Jochen and Scheiter, Simon and + Boerner, Jan}, + title = {Sustainable Management of Extensively Managed Savanna Rangelands}, + journal = {Ecological Economics}, + year = {2007}, + volume = {62}, + pages = {102-114}, + number = {1}, + month = apr, + doi = {10.1016/j.ecolecon.2006.05.019}, + owner = {ardiad}, + timestamp = {2009.12.03} +} + + at BOOK{Holland1975, + title = {Adaptation in Natural Artificial Systems}, + publisher = {University of Michigan Press}, + year = {1975}, + author = {Holland, John H.}, + address = {Ann Arbor} +} + + at ARTICLE{KrinkMittnikPaterlini2009, + author = {Krink, Thiemo and Mittnik, Stefan and Paterlini, Sandra}, + title = {Differential Evolution and Combinatorial Search for Constrained Index-Tracking}, + journal = {Annals of Operations Research}, + year = {2009}, + volume = {172}, + pages = {153-176}, + doi = {10.1007/s10479-009-0552-1}, + owner = {ardiad}, + timestamp = {2010.02.05} +} + + at ARTICLE{KrinkPaterlini2009, + author = {Krink, Thiemo and Paterlini, Sandra}, + title = {Multiobjective Optimization using Differential Evolution for Real-World + Portfolio Optimization}, + journal = {Computational Management Science}, + year = {2009}, + doi = {10.1007/s10287-009-0107-6}, + owner = {ardiad}, + timestamp = {2010.02.05} +} + + at MISC{Lampinen2009, + author = {Lampinen, Jouni A.}, + title = {A Bibliography of Differential Evolution Algorithm}, + year = {2009}, + owner = {ardiad}, + timestamp = {2010.02.05}, + url = {http://www2lutfi/~jlampine/debibliohtm} +} + + at INCOLLECTION{Maringer2005, + author = {Maringer, Dietmar G.}, + title = {Portfolio Management with Heuristic Optimization}, + booktitle = {Advanced in Computational Management Science}, + publisher = {Springer-Verlag}, + year = {2005}, + volume = {8}, + series = {Advances in Computational Management Science}, + chapter = {14}, 
+ owner = {ardiad}, + timestamp = {2010.02.07} +} + + at ARTICLE{MaringerMeyer2008, + author = {Maringer, Dietmar G. and Meyer, Mark}, + title = {Smooth Transition Autoregressive Models. New Approaches to the Model + Selection Problem}, + journal = {Studies in Nonlinear Dynamics \& Econometrics}, + year = {2008}, + volume = {12}, + pages = {1-19}, + number = {1}, + month = jan, + note = {Article nr. 5}, + file = {MaringerMeyer_SmoothTransitionAutoregressiveModelsNewApproachesToTheModelSelectionProblem.PDF:MaringerMeyer_SmoothTransitionAutoregressiveModelsNewApproachesToTheModelSelectionProblem.PDF:PDF}, + owner = {ardiad}, + timestamp = {2010.02.07}, + url = {http://www.bepress.com/snde/vol12/iss1/} +} + + at ARTICLE{MaringerOyewumi2007, + author = {Maringer, Dietmar G. and Oyewumi, Olufemi}, + title = {Index Tracking with Constrained Portfolios}, + journal = {Intelligent Systems in Accounting, Finance \& Management}, + year = {2007}, + volume = {15}, + pages = {57-71}, + number = {1--2}, + doi = {10.1002/isaf.285}, + owner = {ardiad}, + timestamp = {2010.02.05} +} + + at BOOK{Mitchell1998, + title = {An Introduction to Genetic Algorithms}, + publisher = {The MIT Press}, + year = {1998}, + author = {Mitchell, Melanie} +} + + at MISC{MullenArdiaGilWindoverCline2009, + author = {Mullen, Katharine M. and Ardia, David and Gil, David L. and Windover, + Donald and Cline, James}, + title = {{DEoptim}: An {R} Package for Global Optimization by Differential + Evolution}, [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2798 From noreply at r-forge.r-project.org Fri Aug 16 18:45:24 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 16 Aug 2013 18:45:24 +0200 (CEST) Subject: [Returnanalytics-commits] r2799 - in pkg/PortfolioAnalytics: sandbox vignettes Message-ID: <20130816164524.39B71184CDB@r-forge.r-project.org> Author: braverock Date: 2013-08-16 18:45:23 +0200 (Fri, 16 Aug 2013) New Revision: 2799 Added: pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw pkg/PortfolioAnalytics/vignettes/ROI_vignette.pdf pkg/PortfolioAnalytics/vignettes/portfolio_vignette.pdf Removed: pkg/PortfolioAnalytics/sandbox/ROI_vignette.Rnw pkg/PortfolioAnalytics/sandbox/ROI_vignette.pdf pkg/PortfolioAnalytics/sandbox/portfolio_vignette.pdf Log: - move ROI-vignette and compiled pdfs to vignettes dir Deleted: pkg/PortfolioAnalytics/sandbox/ROI_vignette.Rnw =================================================================== --- pkg/PortfolioAnalytics/sandbox/ROI_vignette.Rnw 2013-08-16 16:31:25 UTC (rev 2798) +++ pkg/PortfolioAnalytics/sandbox/ROI_vignette.Rnw 2013-08-16 16:45:23 UTC (rev 2799) @@ -1,141 +0,0 @@ -\documentclass[12pt,letterpaper,english]{article} -\usepackage[OT1]{fontenc} -\usepackage{Sweave} -\usepackage{verbatim} -\usepackage{Rd} -\usepackage{amsmath} - -\begin{document} - -\title{Using the ROI solvers with PortfolioAnalytics} -\author{Ross Bennett} - -\maketitle - -\begin{abstract} -The purpose of this vignette is to demonstrate a sample of the optimzation problems that can be solved in PortfolioAnalytics with the ROI solvers. See \code{demo(demo\_ROI)} for a more complete set of examples. -\end{abstract} - -\tableofcontents - -\section{Getting Started} -\subsection{Load Packages} -Load the necessary packages. 
-<<>>= -suppressMessages(library(PortfolioAnalytics)) -suppressMessages(library(Rglpk)) -suppressMessages(library(foreach)) -suppressMessages(library(iterators)) -suppressMessages(library(ROI)) -suppressMessages(require(ROI.plugin.glpk)) -suppressMessages(require(ROI.plugin.quadprog)) -@ - -\subsection{Data} -The edhec data set from the PerformanceAnalytics package will be used as example data. -<<>>= -data(edhec) - -# Use the first 4 columns in edhec for a returns object -returns <- edhec[, 1:4] -print(head(returns, 5)) - -# Get a character vector of the fund names -funds <- colnames(returns) -@ - - -\section{Maximizing Mean Return} -The objective to maximize mean return is a linear problem of the form: -\begin{equation*} - \begin{aligned} - & \underset{\boldsymbol{w}}{\text{maximize}} - & & \hat{\boldsymbol{\mu}}' \boldsymbol{w} \\ - \end{aligned} -\end{equation*} - -Where $\hat{\boldsymbol{\mu}}$ is the estimated mean asset returns and $\boldsymbol{w}$ is the set of weights. Because this is a linear problem, it is well suited to be solved using a linear programming solver. For these types of problems, PortfolioAnalytics uses the ROI package with the glpk plugin. - -\subsection{Portfolio Object} - -The first step is to create the portfolio object. Then add constraints and a return objective. -<<>>= -# Create portfolio object -portf_maxret <- portfolio.spec(assets=funds) - -# Add constraints to the portfolio object -portf_maxret <- add.constraint(portfolio=portf_maxret, type="full_investment") -portf_maxret <- add.constraint(portfolio=portf_maxret, type="box", - min=c(0.02, 0.05, 0.03, 0.02), - max=c(0.55, 0.6, 0.65, 0.5)) - -# Add objective to the portfolio object -portf_maxret <- add.objective(portfolio=portf_maxret, type="return", name="mean") -@ - -The print method for the portfolio object shows a high level overview while the summary method shows much more detail of the assets, constraints, and objectives that are specified in the portfolio object. -<<>>= -print(portf_maxret) -summary(portf_maxret) -@ - -\subsection{Optimization} -The next step is to run the optimization. Note that \code{optimize\_method="ROI"} is specified in the call to \code{optimize.portfolio} to select the solver used for the optimization. -<<>>= -# Run the optimization -opt_maxret <- optimize.portfolio(R=returns, portfolio=portf_maxret, optimize_method="ROI") -@ - -The print method for the \code{opt\_maxret} object shows the call, optimal weights, and the objective measure -<<>>= -print(opt_maxret) -@ - -The sumary method for the \code{opt\_maxret} object shows details of the object with constraints, objectives, and other portfolio statistics. -<<>>= -summary(opt_maxret) -@ - - -The \code{opt\_maxret} object is of class \code{optimize.portfolio.ROI} and contains the following elements. Objects of class \code{optimize.portfolio.ROI} are S3 objects and elements can be accessed with the \code{\$} operator. -<<>>= -names(opt_maxret) -@ - -The optimal weights and value of the objective function at the optimum can be accessed with the \code{extractStats} function. -<<>>= -extractStats(opt_maxret) -@ - -The optimal weights can be accessed with the \code{extractWeights} function. -<<>>= -extractWeights(opt_maxret) -@ - -\subsection{Visualization} -The chart of the optimal weights as well as the box constraints can be created with \code{chart.Weights.ROI}. The blue dots are the optimal weights and the gray triangles are the \code{min} and \code{max} of the box constraints. 
-<>= -chart.Weights.ROI(opt_maxret) -@ - -The optimal portfolio can be plotted in risk-return space along with other feasible portfolios. The return metric is defined in the \code{return.col} argument and the risk metric is defined in the \code{risk.col} argument. The scatter chart includes the optimal portfolio (blue dot) and other feasible portfolios (gray circles) to show the overall feasible space given the constraints. By default, if \code{rp} is not passed in, the feasible portfolios are generated with \code{random\_portfolios} to satisfy the constraints of the portfolio object. - -Volatility as the risk metric -<>= -chart.Scatter.ROI(opt_maxret, R=returns,return.col="mean", risk.col="sd", main="Maximum Return") -@ - -Expected tail loss as the risk metric -<>= -chart.Scatter.ROI(opt_maxret, R=returns, return.col="mean", risk.col="ETL", main="Maximum Return", invert=FALSE, p=0.9) -@ - -\subsection{Backtesting} -An out of sample backtest is run with \code{optimize.portfolio.rebalancing}. In this example, an initial training period of 36 months is used and the portfolio is rebalanced quarterly. -<<>>= -bt_maxret <- optimize.portfolio.rebalancing(R=returns, portfolio=portf_maxret, optimize_method="ROI", rebalance_on="quarters", training_period=36, trace=TRUE) -@ - -The \code{bt\_maxret} object is a list containing the optimal weights and objective measure at each rebalance period. - -\end{document} \ No newline at end of file Deleted: pkg/PortfolioAnalytics/sandbox/ROI_vignette.pdf =================================================================== (Binary files differ) Deleted: pkg/PortfolioAnalytics/sandbox/portfolio_vignette.pdf =================================================================== (Binary files differ) Copied: pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw (from rev 2798, pkg/PortfolioAnalytics/sandbox/ROI_vignette.Rnw) =================================================================== --- pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw (rev 0) +++ pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw 2013-08-16 16:45:23 UTC (rev 2799) @@ -0,0 +1,141 @@ +\documentclass[12pt,letterpaper,english]{article} +\usepackage[OT1]{fontenc} +\usepackage{Sweave} +\usepackage{verbatim} +\usepackage{Rd} +\usepackage{amsmath} + +\begin{document} + +\title{Using the ROI solvers with PortfolioAnalytics} +\author{Ross Bennett} + +\maketitle + +\begin{abstract} +The purpose of this vignette is to demonstrate a sample of the optimzation problems that can be solved in PortfolioAnalytics with the ROI solvers. See \code{demo(demo\_ROI)} for a more complete set of examples. +\end{abstract} + +\tableofcontents + +\section{Getting Started} +\subsection{Load Packages} +Load the necessary packages. +<<>>= +suppressMessages(library(PortfolioAnalytics)) +suppressMessages(library(Rglpk)) +suppressMessages(library(foreach)) +suppressMessages(library(iterators)) +suppressMessages(library(ROI)) +suppressMessages(require(ROI.plugin.glpk)) +suppressMessages(require(ROI.plugin.quadprog)) +@ + +\subsection{Data} +The edhec data set from the PerformanceAnalytics package will be used as example data. 
+<<>>= +data(edhec) + +# Use the first 4 columns in edhec for a returns object +returns <- edhec[, 1:4] +print(head(returns, 5)) + +# Get a character vector of the fund names +funds <- colnames(returns) +@ + + +\section{Maximizing Mean Return} +The objective to maximize mean return is a linear problem of the form: +\begin{equation*} + \begin{aligned} + & \underset{\boldsymbol{w}}{\text{maximize}} + & & \hat{\boldsymbol{\mu}}' \boldsymbol{w} \\ + \end{aligned} +\end{equation*} + +Where $\hat{\boldsymbol{\mu}}$ is the estimated mean asset returns and $\boldsymbol{w}$ is the set of weights. Because this is a linear problem, it is well suited to be solved using a linear programming solver. For these types of problems, PortfolioAnalytics uses the ROI package with the glpk plugin. + +\subsection{Portfolio Object} + +The first step is to create the portfolio object. Then add constraints and a return objective. +<<>>= +# Create portfolio object +portf_maxret <- portfolio.spec(assets=funds) + +# Add constraints to the portfolio object +portf_maxret <- add.constraint(portfolio=portf_maxret, type="full_investment") +portf_maxret <- add.constraint(portfolio=portf_maxret, type="box", + min=c(0.02, 0.05, 0.03, 0.02), + max=c(0.55, 0.6, 0.65, 0.5)) + +# Add objective to the portfolio object +portf_maxret <- add.objective(portfolio=portf_maxret, type="return", name="mean") +@ + +The print method for the portfolio object shows a high level overview while the summary method shows much more detail of the assets, constraints, and objectives that are specified in the portfolio object. +<<>>= +print(portf_maxret) +summary(portf_maxret) +@ + +\subsection{Optimization} +The next step is to run the optimization. Note that \code{optimize\_method="ROI"} is specified in the call to \code{optimize.portfolio} to select the solver used for the optimization. +<<>>= +# Run the optimization +opt_maxret <- optimize.portfolio(R=returns, portfolio=portf_maxret, optimize_method="ROI") +@ + +The print method for the \code{opt\_maxret} object shows the call, optimal weights, and the objective measure +<<>>= +print(opt_maxret) +@ + +The sumary method for the \code{opt\_maxret} object shows details of the object with constraints, objectives, and other portfolio statistics. +<<>>= +summary(opt_maxret) +@ + + +The \code{opt\_maxret} object is of class \code{optimize.portfolio.ROI} and contains the following elements. Objects of class \code{optimize.portfolio.ROI} are S3 objects and elements can be accessed with the \code{\$} operator. +<<>>= +names(opt_maxret) +@ + +The optimal weights and value of the objective function at the optimum can be accessed with the \code{extractStats} function. +<<>>= +extractStats(opt_maxret) +@ + +The optimal weights can be accessed with the \code{extractWeights} function. +<<>>= +extractWeights(opt_maxret) +@ + +\subsection{Visualization} +The chart of the optimal weights as well as the box constraints can be created with \code{chart.Weights.ROI}. The blue dots are the optimal weights and the gray triangles are the \code{min} and \code{max} of the box constraints. +<>= +chart.Weights.ROI(opt_maxret) +@ + +The optimal portfolio can be plotted in risk-return space along with other feasible portfolios. The return metric is defined in the \code{return.col} argument and the risk metric is defined in the \code{risk.col} argument. The scatter chart includes the optimal portfolio (blue dot) and other feasible portfolios (gray circles) to show the overall feasible space given the constraints. 
By default, if \code{rp} is not passed in, the feasible portfolios are generated with \code{random\_portfolios} to satisfy the constraints of the portfolio object. + +Volatility as the risk metric +<>= +chart.Scatter.ROI(opt_maxret, R=returns,return.col="mean", risk.col="sd", main="Maximum Return") +@ + +Expected tail loss as the risk metric +<>= +chart.Scatter.ROI(opt_maxret, R=returns, return.col="mean", risk.col="ETL", main="Maximum Return", invert=FALSE, p=0.9) +@ + +\subsection{Backtesting} +An out of sample backtest is run with \code{optimize.portfolio.rebalancing}. In this example, an initial training period of 36 months is used and the portfolio is rebalanced quarterly. +<<>>= +bt_maxret <- optimize.portfolio.rebalancing(R=returns, portfolio=portf_maxret, optimize_method="ROI", rebalance_on="quarters", training_period=36, trace=TRUE) +@ + +The \code{bt\_maxret} object is a list containing the optimal weights and objective measure at each rebalance period. + +\end{document} \ No newline at end of file Copied: pkg/PortfolioAnalytics/vignettes/ROI_vignette.pdf (from rev 2798, pkg/PortfolioAnalytics/sandbox/ROI_vignette.pdf) =================================================================== (Binary files differ) Copied: pkg/PortfolioAnalytics/vignettes/portfolio_vignette.pdf (from rev 2798, pkg/PortfolioAnalytics/sandbox/portfolio_vignette.pdf) =================================================================== (Binary files differ) From noreply at r-forge.r-project.org Fri Aug 16 19:51:30 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 16 Aug 2013 19:51:30 +0200 (CEST) Subject: [Returnanalytics-commits] r2800 - in pkg/PortfolioAnalytics: R man Message-ID: <20130816175130.E40CD1847F4@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-16 19:51:30 +0200 (Fri, 16 Aug 2013) New Revision: 2800 Modified: pkg/PortfolioAnalytics/R/constraints.R pkg/PortfolioAnalytics/man/add.constraint.Rd pkg/PortfolioAnalytics/man/box_constraint.Rd pkg/PortfolioAnalytics/man/constraint.Rd pkg/PortfolioAnalytics/man/constraint_v2.Rd pkg/PortfolioAnalytics/man/diversification_constraint.Rd pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd pkg/PortfolioAnalytics/man/group_constraint.Rd pkg/PortfolioAnalytics/man/position_limit_constraint.Rd pkg/PortfolioAnalytics/man/return_constraint.Rd pkg/PortfolioAnalytics/man/turnover_constraint.Rd pkg/PortfolioAnalytics/man/weight_sum_constraint.Rd Log: updating documentation for constraints Modified: pkg/PortfolioAnalytics/R/constraints.R =================================================================== --- pkg/PortfolioAnalytics/R/constraints.R 2013-08-16 16:45:23 UTC (rev 2799) +++ pkg/PortfolioAnalytics/R/constraints.R 2013-08-16 17:51:30 UTC (rev 2800) @@ -154,13 +154,13 @@ #' constructor for class v2_constraint #' -#' @param type character type of the constraint to add or update, currently 'weight_sum', 'box', or 'group' +#' This function is called by the constructor for the specific constraint. +#' +#' @param type character type of the constraint to add or update #' @param assets number of assets, or optionally a named vector of assets specifying seed weights #' @param ... 
any other passthru parameters
 #' @param constrclass character to name the constraint class
 #' @author Ross Bennett
-#' @aliases constraint
-#' @rdname constraint
 #' @export
 constraint_v2 <- function(type, enabled=TRUE, ..., constrclass="v2_constraint"){
   if(!hasArg(type)) stop("you must specify a constraint type")
@@ -181,18 +181,70 @@
 #' General interface for adding and/or updating optimization constraints.
 #'
-#' This is the main function for adding and/or updating constraints in an object of type \code{\link{portfolio}}.
+#' This is the main function for adding and/or updating constraints in the \code{portfolio} object.
 #'
-#' Special cases for the weight_sum constraint are "full_investment" and "dollar_nuetral" or "active" with appropriate values set for min_sum and max_sum. see \code{\link{weight_sum_constraint}}
+#' The following constraint types are supported:
+#' \itemize{
+#' \item{\code{weight_sum}, \code{weight}, \code{leverage}}{ Specify constraint on the sum of the weights, see \code{\link{weight_sum_constraint}}}
+#' \item{\code{full_investment}}{ Special case to set \code{min_sum=1} and \code{max_sum=1} of weight sum constraints}
+#' \item{\code{dollar_neutral}, \code{active}}{ Special case to set \code{min_sum=0} and \code{max_sum=0} of weight sum constraints}
+#' \item{\code{box}}{ Specify constraints for the individual asset weights, see \code{\link{box_constraint}}}
+#' \item{\code{long_only}}{ Special case to set \code{min=0} and \code{max=1} of box constraints}
+#' \item{\code{group}}{ Specify a constraint on the sum of weights within groups and the number of assets with non-zero weights in groups, see \code{\link{group_constraint}}}
+#' \item{\code{turnover}}{ Specify a constraint for target turnover. Turnover is calculated from a set of initial weights, see \code{\link{turnover_constraint}}}
+#' \item{\code{diversification}}{ Specify a constraint for target diversification of a set of weights, see \code{\link{diversification_constraint}}}
+#' \item{\code{position_limit}}{ Specify a constraint on the number of positions (i.e. assets with non-zero weights) as well as the number of long and short positions, see \code{\link{position_limit_constraint}}}
+#' \item{\code{return}}{ Specify a constraint for target mean return, see \code{\link{return_constraint}}}
+#' \item{\code{factor_exposure}}{ Specify a constraint for risk factor exposures, see \code{\link{factor_exposure_constraint}}}
+#' }
 #'
 #' @param portfolio an object of class 'portfolio' to add the constraint to, specifying the constraints for the optimization, see \code{\link{portfolio.spec}}
 #' @param type character type of the constraint to add or update, currently 'weight_sum' (also 'leverage' or 'weight'), 'box', 'group', 'turnover', 'diversification', 'position_limit', 'return', or 'factor_exposure'
-#' @param enabled TRUE/FALSE
+#' @param enabled TRUE/FALSE. The default is enabled=TRUE.
 #' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE.
 #' @param \dots any other passthru parameters to specify constraints
-#' @param indexnum if you are updating a specific constraint, the index number in the $objectives list to update
+#' @param indexnum if you are updating a specific constraint, the index number in the $constraints list to update
 #' @author Ross Bennett
-#' @seealso \code{\link{constraint_v2}}, \code{\link{weight_sum_constraint}}, \code{\link{box_constraint}}, \code{\link{group_constraint}}, \code{\link{turnover_constraint}}, \code{\link{diversification_constraint}}, \code{\link{position_limit_constraint}}
+#' @seealso \code{\link{weight_sum_constraint}}, \code{\link{box_constraint}}, \code{\link{group_constraint}}, \code{\link{turnover_constraint}}, \code{\link{diversification_constraint}}, \code{\link{position_limit_constraint}}, \code{\link{return_constraint}}, \code{\link{factor_exposure_constraint}}
+#' @examples
+#' data(edhec)
+#' returns <- edhec[, 1:4]
+#' fund.names <- colnames(returns)
+#' pspec <- portfolio.spec(assets=fund.names)
+#' # Add the full investment constraint that specifies the weights must sum to 1.
+#' pspec <- add.constraint(portfolio=pspec, type="weight_sum", min_sum=1, max_sum=1)
+#' # The full investment constraint can also be specified with type="full_investment"
+#' pspec <- add.constraint(portfolio=pspec, type="full_investment")
+#'
+#' # Another common constraint is that portfolio weights sum to 0.
+#' pspec <- add.constraint(portfolio=pspec, type="weight_sum", min_sum=0, max_sum=0)
+#' pspec <- add.constraint(portfolio=pspec, type="dollar_neutral")
+#' pspec <- add.constraint(portfolio=pspec, type="active")
+#'
+#' # Add box constraints
+#' pspec <- add.constraint(portfolio=pspec, type="box", min=0.05, max=0.4)
+#'
+#' # min and max can also be specified per asset
+#' pspec <- add.constraint(portfolio=pspec, type="box", min=c(0.05, 0, 0.08, 0.1), max=c(0.4, 0.3, 0.7, 0.55))
+#' # A special case of box constraints is long only where min=0 and max=1
+#' # The default action is long only if min and max are not specified
+#' pspec <- add.constraint(portfolio=pspec, type="box")
+#' pspec <- add.constraint(portfolio=pspec, type="long_only")
+#'
+#' # Add group constraints
+#' pspec <- add.constraint(portfolio=pspec, type="group", groups=c(3, 1), group_min=c(0.1, 0.15), group_max=c(0.85, 0.55), group_labels=c("GroupA", "GroupB"), group_pos=c(2, 1))
+#'
+#' # Add position limit constraint such that we have a maximum number of three assets with non-zero weights.
+#' pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos=3)
+#'
+#' # Add diversification constraint
+#' pspec <- add.constraint(portfolio=pspec, type="diversification", div_target=0.7)
+#'
+#' # Add turnover constraint
+#' pspec <- add.constraint(portfolio=pspec, type="turnover", turnover_target=0.2)
+#'
+#' # Add target mean return constraint
+#' pspec <- add.constraint(portfolio=pspec, type="return", return_target=0.007)
 #' @export
 add.constraint <- function(portfolio, type, enabled=TRUE, message=FALSE, ..., indexnum=NULL){
   # Check to make sure that the portfolio passed in is a portfolio object
@@ -296,6 +348,7 @@
 #' constructor for box_constraint.
 #'
+#' Box constraints specify the upper and lower bounds on the weights of the assets.
 #' This function is called by add.constraint when type="box" is specified, see \code{\link{add.constraint}}.
 #'
 #' @param type character type of the constraint
@@ -416,6 +469,7 @@
 #' constructor for group_constraint
 #'
+#' Group constraints specify the grouping of the assets, the weights of the groups, and the number of positions (i.e. non-zero weights) of the groups.
 #' This function is called by add.constraint when type="group" is specified, see \code{\link{add.constraint}}.
 #'
 #' @param type character type of the constraint
@@ -495,14 +549,14 @@
 #' constructor for weight_sum_constraint
 #'
+#' The constraint specifies the upper and lower bounds that the weights sum to.
 #' This function is called by add.constraint when "weight_sum", "leverage", "full_investment", "dollar_neutral", or "active" is specified as the type, see \code{\link{add.constraint}}.
-#' This function allows the user to specify the minimum and maximum that the weights sum to
 #'
 #' Special cases for the weight_sum constraint are "full_investment" and "dollar_neutral" or "active"
 #'
-#' If type="full_investment", min_sum=1 and max_sum=1
+#' If \code{type="full_investment"}, \code{min_sum=1} and \code{max_sum=1}
 #'
-#' If type="dollar_neutral" or type="active", min_sum=0, and max_sum=0
+#' If \code{type="dollar_neutral"} or \code{type="active"}, \code{min_sum=0} and \code{max_sum=0}
 #'
 #' @param type character type of the constraint
 #' @param min_sum minimum sum of all asset weights, default 0.99
@@ -511,6 +565,7 @@
 #' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE.
 #' @param \dots any other passthru parameters to specify weight_sum constraints
 #' @author Ross Bennett
+#' @seealso \code{\link{add.constraint}}
 #' @examples
 #' data(edhec)
 #' ret <- edhec[, 1:4]
@@ -650,11 +705,13 @@
 #' constructor for turnover_constraint
 #'
-#' This function is called by add.constraint when type="turnover" is specified. see \code{\link{add.constraint}}
-#' This function allows the user to specify a target turnover value
+#' The turnover constraint specifies a target turnover value.
+#' This function is called by add.constraint when type="turnover" is specified, see \code{\link{add.constraint}}.
+#' Turnover is calculated from a set of initial weights.
 #'
-#' Note that turnover constraint is currently only supported for global minimum
-#' variance problem with ROI quadprog plugin
+#' Note that with the ROI solvers, the turnover constraint is currently only
+#' supported for the global minimum variance and quadratic utility problems
+#' with the ROI quadprog plugin.
 #'
 #' @param type character type of the constraint
 #' @param turnover_target target turnover value
@@ -662,6 +719,7 @@
 #' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE.
 #' @param \dots any other passthru parameters to specify box and/or group constraints
 #' @author Ross Bennett
+#' @seealso \code{\link{add.constraint}}
 #' @examples
 #' data(edhec)
 #' ret <- edhec[, 1:4]
@@ -678,7 +736,8 @@
 #' constructor for diversification_constraint
 #'
-#' This function is called by add.constraint when type="diversification" is specified, \code{\link{add.constraint}}
+#' The diversification constraint specifies a target diversification value.
+#' This function is called by add.constraint when type="diversification" is specified, see \code{\link{add.constraint}}.
 #'
 #' @param type character type of the constraint
 #' @param div_target diversification target value
@@ -686,6 +745,7 @@
 #' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE.
 #' @param \dots any other passthru parameters to specify box and/or group constraints
 #' @author Ross Bennett
+#' @seealso \code{\link{add.constraint}}
 #' @examples
 #' data(edhec)
 #' ret <- edhec[, 1:4]
@@ -702,6 +762,7 @@
 #' constructor for return_constraint
 #'
+#' The return constraint specifies a target mean return value.
 #' This function is called by add.constraint when type="return" is specified, see \code{\link{add.constraint}}
 #'
 #' @param type character type of the constraint
@@ -710,6 +771,7 @@
 #' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE.
 #' @param \dots any other passthru parameters
 #' @author Ross Bennett
+#' @seealso \code{\link{add.constraint}}
 #' @examples
 #' data(edhec)
 #' ret <- edhec[, 1:4]
@@ -728,6 +790,7 @@
 #'
 #' This function is called by add.constraint when type="position_limit" is specified, see \code{\link{add.constraint}}.
 #' Allows the user to specify the maximum number of positions (i.e. number of assets with non-zero weights)
+#' as well as the maximum number of long and short positions.
 #'
 #' @param type character type of the constraint
 #' @param max_pos maximum number of assets with non-zero weights
@@ -737,13 +800,15 @@
 #' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE.
 #' @param \dots any other passthru parameters to specify position limit constraints
 #' @author Ross Bennett
-#' #' @examples
+#' @seealso \code{\link{add.constraint}}
+#' @examples
 #' data(edhec)
 #' ret <- edhec[, 1:4]
 #'
 #' pspec <- portfolio.spec(assets=colnames(ret))
 #'
 #' pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos=3)
+#' pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos_long=3, max_pos_short=1)
 #' @export
 position_limit_constraint <- function(type="position_limit", assets, max_pos=NULL, max_pos_long=NULL, max_pos_short=NULL, enabled=TRUE, message=FALSE, ...){
   # Get the length of the assets vector
@@ -793,6 +858,7 @@
 #' Constructor for factor exposure constraint
 #'
+#' The factor exposure constraint sets upper and lower bounds on exposures to risk factors.
 #' This function is called by add.constraint when type="factor_exposure" is specified, see \code{\link{add.constraint}}.
 #'
 #' \code{B} can be either a vector or matrix of risk factor exposures (i.e. betas).
@@ -816,6 +882,7 @@
 #' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE.
Display messages if TRUE.} @@ -24,29 +24,97 @@ constraints} \item{indexnum}{if you are updating a specific - constraint, the index number in the $objectives list to + constraint, the index number in the $constraints list to update} } \description{ This is the main function for adding and/or updating - constraints in an object of type \code{\link{portfolio}}. + constraints to the \code{{portfolio}} object. } \details{ - Special cases for the weight_sum constraint are - "full_investment" and "dollar_nuetral" or "active" with - appropriate values set for min_sum and max_sum. see - \code{\link{weight_sum_constraint}} + The following constraint types are supported: \itemize{ + \item{\code{weight_sum}, \code{weight}, \code{leverage}}{ + Specify constraint on the sum of the weights, see + \code{\link{weight_sum_constraint}}} + \item{\code{full_investment}}{ Special case to set + \code{min_sum=1} and \code{max_sum=1} of weight sum + constraints} \item{\code{dollar_neutral}, \code{active}}{ + Special case to set \code{min_sum=0} and \code{max_sum=0} + of weight sum constraints} \item{\code{box}}{ Specify + constraints for the individual asset weights, see + \code{\link{box_constraint}}} \item{\code{long_only}}{ + Special case to set \code{min=0} and \code{max=1} of box + constraints} \item{\code{group}}{ Specify a constraint on + the sum of weights within groups and the number of assets + with non-zero weights in groups, see + \code{\link{group_constraint}}} \item{\code{turnover}}{ + Specify a constraint for target turnover. Turnover is + calculated from a set of initial weights, see + \code{\link{turnover_constraint}}} + \item{\code{diversification}}{ Specify a constraint for + target diversification of a set of weights, see + \code{\link{diversification_constraint}}} + \item{\code{position_limit}}{ Specify a constraint on the + number of positions (i.e. assets with non-zero weights as + well as the number of long and short positions, see + \code{\link{position_limit_constraint}}} + \item{\code{return}}{ Specify a constraint for target + mean return, see \code{\link{return_constraint}}} + \item{\code{factor_exposure}}{ Specify a constraint for + risk factor exposures, see + \code{\link{factor_exposure_constraint}}} } } +\examples{ +data(edhec) +returns <- edhec[, 1:4] +fund.names <- colnames(returns) +pspec <- portfolio.spec(assets=fund.names) +# Add the full investment constraint that specifies the weights must sum to 1. +pspec <- add.constraint(portfolio=pspec, type="weight_sum", min_sum=1, max_sum=1) +# The full investment constraint can also be specified with type="full_investment" +pspec <- add.constraint(portfolio=pspec, type="full_investment") + +# Another common constraint is that portfolio weights sum to 0. 
+pspec <- add.constraint(portfolio=pspec, type="weight_sum", min_sum=0, max_sum=0) +pspec <- add.constraint(portfolio=pspec, type="dollar_neutral") +pspec <- add.constraint(portfolio=pspec, type="active") + +# Add box constraints +pspec <- add.constraint(portfolio=pspec, type="box", min=0.05, max=0.4) + +min and max can also be specified per asset +pspec <- add.constraint(portfolio=pspec, type="box", min=c(0.05, 0, 0.08, 0.1), max=c(0.4, 0.3, 0.7, 0.55)) +# A special case of box constraints is long only where min=0 and max=1 +# The default action is long only if min and max are not specified +pspec <- add.constraint(portfolio=pspec, type="box") +pspec <- add.constraint(portfolio=pspec, type="long_only") + +# Add group constraints +pspec <- add.constraint(portfolio=pspec, type="group", groups=c(3, 1), group_min=c(0.1, 0.15), group_max=c(0.85, 0.55), group_labels=c("GroupA", "GroupB"), group_pos=c(2, 1)) + +# Add position limit constraint such that we have a maximum number of three assets with non-zero weights. +pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos=3) + +# Add diversification constraint +pspec <- add.constraint(portfolio=pspec, type="diversification", div_target=0.7) + +# Add turnover constraint +pspec <- add.constraint(portfolio=pspec, type="turnover", turnover_target=0.2) + +# Add target mean return constraint +pspec <- add.constraint(portfolio=pspec, type="return", return_target=0.007) +} \author{ Ross Bennett } \seealso{ - \code{\link{constraint_v2}}, \code{\link{weight_sum_constraint}}, \code{\link{box_constraint}}, \code{\link{group_constraint}}, \code{\link{turnover_constraint}}, \code{\link{diversification_constraint}}, - \code{\link{position_limit_constraint}} + \code{\link{position_limit_constraint}, + \code{\link{return_constraint}, + \code{\link{factor_exposure_constraint}} } Modified: pkg/PortfolioAnalytics/man/box_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/box_constraint.Rd 2013-08-16 16:45:23 UTC (rev 2799) +++ pkg/PortfolioAnalytics/man/box_constraint.Rd 2013-08-16 17:51:30 UTC (rev 2800) @@ -34,8 +34,10 @@ constraints} } \description{ - This function is called by add.constraint when type="box" - is specified. see \code{\link{add.constraint}} + Box constraints specify the upper and lower bounds on the + weights of the assets. This function is called by + add.constraint when type="box" is specified. 
see + \code{\link{add.constraint}} } \examples{ data(edhec) Modified: pkg/PortfolioAnalytics/man/constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/constraint.Rd 2013-08-16 16:45:23 UTC (rev 2799) +++ pkg/PortfolioAnalytics/man/constraint.Rd 2013-08-16 17:51:30 UTC (rev 2800) @@ -1,14 +1,10 @@ \name{constraint} \alias{constraint} -\alias{constraint_v2} \title{constructor for class constraint} \usage{ constraint(assets = NULL, ..., min, max, min_mult, max_mult, min_sum = 0.99, max_sum = 1.01, weight_seq = NULL) - - constraint_v2(type, enabled = TRUE, ..., - constrclass = "v2_constraint") } \arguments{ \item{assets}{number of assets, or optionally a named @@ -38,29 +34,14 @@ \item{weight_seq}{seed sequence of weights, see \code{\link{generatesequence}}} - - \item{type}{character type of the constraint to add or - update, currently 'weight_sum', 'box', or 'group'} - - \item{assets}{number of assets, or optionally a named - vector of assets specifying seed weights} - - \item{...}{any other passthru parameters} - - \item{constrclass}{character to name the constraint - class} } \description{ constructor for class constraint - - constructor for class v2_constraint } \examples{ exconstr <- constraint(assets=10, min_sum=1, max_sum=1, min=.01, max=.35, weight_seq=generatesequence()) } \author{ Peter Carl and Brian G. Peterson - - Ross Bennett } Modified: pkg/PortfolioAnalytics/man/constraint_v2.Rd =================================================================== --- pkg/PortfolioAnalytics/man/constraint_v2.Rd 2013-08-16 16:45:23 UTC (rev 2799) +++ pkg/PortfolioAnalytics/man/constraint_v2.Rd 2013-08-16 17:51:30 UTC (rev 2800) @@ -7,7 +7,7 @@ } \arguments{ \item{type}{character type of the constraint to add or - update, currently 'weight_sum', 'box', or 'group'} + update} \item{assets}{number of assets, or optionally a named vector of assets specifying seed weights} @@ -18,7 +18,8 @@ class} } \description{ - constructor for class v2_constraint + This function is called by the constructor for the + specific constraint. } \author{ Ross Bennett Modified: pkg/PortfolioAnalytics/man/diversification_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/diversification_constraint.Rd 2013-08-16 16:45:23 UTC (rev 2799) +++ pkg/PortfolioAnalytics/man/diversification_constraint.Rd 2013-08-16 17:51:30 UTC (rev 2800) @@ -19,9 +19,10 @@ and/or group constraints} } \description{ - This function is called by add.constraint when - type="diversification" is specified, - \code{\link{add.constraint}} + The diversification constraint specifies a target + diversification value. This function is called by + add.constraint when type="diversification" is specified, + see \code{\link{add.constraint}}. } \examples{ data(edhec) @@ -34,4 +35,7 @@ \author{ Ross Bennett } +\seealso{ + \code{\link{add.constraint}} +} Modified: pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd 2013-08-16 16:45:23 UTC (rev 2799) +++ pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd 2013-08-16 17:51:30 UTC (rev 2800) @@ -29,9 +29,10 @@ risk factor exposure constraints} } \description{ - This function is called by add.constraint when - type="factor_exposure" is specified. 
see - \code{\link{add.constraint}} + The factor exposure constraint sets upper and lower + bounds on exposures to risk factors. This function is + called by add.constraint when type="factor_exposure" is + specified. see \code{\link{add.constraint}} } \details{ \code{B} can be either a vector or matrix of risk factor @@ -54,4 +55,7 @@ \author{ Ross Bennett } +\seealso{ + \code{\link{add.constraint}} +} Modified: pkg/PortfolioAnalytics/man/group_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/group_constraint.Rd 2013-08-16 16:45:23 UTC (rev 2799) +++ pkg/PortfolioAnalytics/man/group_constraint.Rd 2013-08-16 17:51:30 UTC (rev 2800) @@ -35,8 +35,10 @@ group constraints} } \description{ - This function is called by add.constraint when - type="group" is specified. see + Group constraints specify the grouping of the assets, + weights of the groups, and number of postions (i.e. + non-zero weights) iof the groups. This function is called + by add.constraint when type="group" is specified. see \code{\link{add.constraint}} } \examples{ Modified: pkg/PortfolioAnalytics/man/position_limit_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/position_limit_constraint.Rd 2013-08-16 16:45:23 UTC (rev 2799) +++ pkg/PortfolioAnalytics/man/position_limit_constraint.Rd 2013-08-16 17:51:30 UTC (rev 2800) @@ -32,7 +32,8 @@ type="position_limit" is specified, \code{\link{add.constraint}} Allows the user to specify the maximum number of positions (i.e. number of assets - with non-zero weights) + with non-zero weights) as well as the maximum number of + long and short positions. } \examples{ data(edhec) @@ -41,8 +42,12 @@ pspec <- portfolio.spec(assets=colnames(ret)) pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos=3) +pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos_long=3, max_pos_short=1) } \author{ - Ross Bennett #' + Ross Bennett } +\seealso{ + \code{\link{add.constraint}} +} Modified: pkg/PortfolioAnalytics/man/return_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/return_constraint.Rd 2013-08-16 16:45:23 UTC (rev 2799) +++ pkg/PortfolioAnalytics/man/return_constraint.Rd 2013-08-16 17:51:30 UTC (rev 2800) @@ -18,7 +18,8 @@ \item{\dots}{any other passthru parameters} } \description{ - This function is called by add.constraint when + The return constraint specifes a target mean return + value. This function is called by add.constraint when type="return" is specified, \code{\link{add.constraint}} } \examples{ @@ -32,4 +33,7 @@ \author{ Ross Bennett } +\seealso{ + \code{\link{add.constraint}} +} Modified: pkg/PortfolioAnalytics/man/turnover_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/turnover_constraint.Rd 2013-08-16 16:45:23 UTC (rev 2799) +++ pkg/PortfolioAnalytics/man/turnover_constraint.Rd 2013-08-16 17:51:30 UTC (rev 2800) @@ -19,15 +19,16 @@ and/or group constraints} } \description{ - This function is called by add.constraint when - type="turnover" is specified. see - \code{\link{add.constraint}} This function allows the - user to specify a target turnover value + The turnover constraint specifies a target turnover + value. This function is called by add.constraint when + type="turnover" is specified, see + \code{\link{add.constraint}}. Turnover is calculated from + a set of initial weights. 
} \details{ - Note that turnover constraint is currently only supported - for global minimum variance problem with ROI quadprog - plugin + Note that with the RO solvers, turnover constraint is + currently only supported for the global minimum variance + and quadratic utility problems with ROI quadprog plugin. } \examples{ data(edhec) @@ -40,4 +41,7 @@ \author{ Ross Bennett } +\seealso{ + \code{\link{add.constraint}} +} Modified: pkg/PortfolioAnalytics/man/weight_sum_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/weight_sum_constraint.Rd 2013-08-16 16:45:23 UTC (rev 2799) +++ pkg/PortfolioAnalytics/man/weight_sum_constraint.Rd 2013-08-16 17:51:30 UTC (rev 2800) @@ -23,21 +23,21 @@ weight_sum constraints} } \description{ - This function is called by add.constraint when - "weight_sum", "leverage", "full_investment", - "dollar_neutral", or "active" is specified as the type. - see \code{\link{add.constraint}} This function allows the - user to specify the minimum and maximum that the weights - sum to + THe constraint specifies the upper and lower bound that + the weights sum to. This function is called by + add.constraint when "weight_sum", "leverage", + "full_investment", "dollar_neutral", or "active" is + specified as the type. see \code{\link{add.constraint}} } \details{ Special cases for the weight_sum constraint are "full_investment" and "dollar_nuetral" or "active" - If type="full_investment", min_sum=1 and max_sum=1 + If \code{type="full_investment"}, \code{min_sum=1} and + \code{max_sum=1} - If type="dollar_neutral" or type="active", min_sum=0, and - max_sum=0 + If \code{type="dollar_neutral"} or \code{type="active"}, + \code{min_sum=0}, and \code{max_sum=0} } \examples{ data(edhec) @@ -58,4 +58,7 @@ \author{ Ross Bennett } +\seealso{ + \code{\link{add.constraint}} +} From noreply at r-forge.r-project.org Fri Aug 16 20:23:48 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 16 Aug 2013 20:23:48 +0200 (CEST) Subject: [Returnanalytics-commits] r2801 - in pkg/PortfolioAnalytics: R man Message-ID: <20130816182348.5CF66184ABF@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-16 20:23:47 +0200 (Fri, 16 Aug 2013) New Revision: 2801 Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R pkg/PortfolioAnalytics/man/optimize.portfolio.Rd Log: adding documentation for optimize.portfolio Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-16 17:51:30 UTC (rev 2800) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-16 18:23:47 UTC (rev 2801) @@ -505,19 +505,29 @@ #' When using GenSA and want to set \code{verbose=TRUE}, instead use \code{trace}. 
#' #' The extension to ROI solves a limited type of convex optimization problems: -#' 1) Maxmimize portfolio return subject leverage, box, and/or constraints on weights -#' 2) Minimize portfolio variance subject to leverage, box, and/or group constraints (otherwise known as global minimum variance portfolio) -#' 3) Minimize portfolio variance subject to leverage, box, and/or group constraints and a desired portfolio return -#' 4) Maximize quadratic utility subject to leverage, box, and/or group constraints and risk aversion parameter (this is passed into \code{optimize.portfolio} as as added argument to the \code{constraints} object) -#' 5) Mean CVaR optimization subject to leverage, box, and/or group constraints and target portfolio return +#' \itemize{ +#' \item{Maximize portfolio return subject to leverage, box, group, position limit, target mean return, and/or factor exposure constraints on weights} +#' \item{Minimize portfolio variance subject to leverage, box, group, and/or factor exposure constraints (otherwise known as global minimum variance portfolio)} +#' \item{Minimize portfolio variance subject to leverage, box, group, and/or factor exposure constraints and a desired portfolio return} +#' \item{Maximize quadratic utility subject to leverage, box, group, target mean return, and/or factor exposure constraints and risk aversion parameter. +#' (The risk aversion parameter is passed into \code{optimize.portfolio} as an added argument to the \code{portfolio} object)} +#' \item{Mean CVaR optimization subject to leverage, box, group, position limit, target mean return, and/or factor exposure constraints and target portfolio return} +#' } #' Lastly, because these convex optimization problems are standardized, there is no need for a penalty term. #' Therefore, the \code{multiplier} argument in \code{\link{add.objective}} passed into the complete constraint object is ignored by the solver. #' #' If you would like to interface with \code{optimize.portfolio} using matrix formulations, then use \code{ROI_old}. #' +#' An object of class \code{v1_constraint} can be passed in for the \code{constraints} argument. +#' The \code{v1_constraint} object was used in the previous 'v1' specification to specify the +#' constraints and objectives for the optimization problem, see \code{\link{constraint}}. +#' We will attempt to detect if the object passed into the constraints argument +#' is a \code{v1_constraint} object and update to the 'v2' specification by adding the +#' constraints and objectives to the \code{portfolio} object. +#' #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns #' @param portfolio an object of type "portfolio" specifying the constraints and objectives for the optimization #' @param constraints default=NULL, a list of constraint objects. An object of class 'v1_constraint' can be passed in here. #' @param objectives default=NULL, a list of objective objects #' @param optimize_method one of "DEoptim", "random", "ROI","ROI_old", "pso", "GenSA". For using \code{ROI_old}, need to use a constraint_ROI object in constraints. For using \code{ROI}, pass standard \code{constraint} object in \code{constraints} argument. Presently, ROI has plugins for \code{quadprog} and \code{Rglpk}.
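# An untested sketch of the second ROI problem type listed above (global
# minimum variance). The portfolio construction is illustrative, not from
# this commit; it assumes the ROI and ROI.plugin.quadprog packages are
# installed and that the "var" risk objective is routed to quadprog.
library(PortfolioAnalytics)
data(edhec)
gmv <- portfolio.spec(assets=colnames(edhec))
gmv <- add.constraint(portfolio=gmv, type="full_investment")
gmv <- add.constraint(portfolio=gmv, type="box", min=0, max=1)
gmv <- add.objective(portfolio=gmv, type="risk", name="var")
opt.gmv <- optimize.portfolio(R=edhec, portfolio=gmv, optimize_method="ROI")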
#' @param search_size integer, how many portfolios to test, default 20,000 Modified: pkg/PortfolioAnalytics/man/optimize.portfolio.Rd =================================================================== --- pkg/PortfolioAnalytics/man/optimize.portfolio.Rd 2013-08-16 17:51:30 UTC (rev 2800) +++ pkg/PortfolioAnalytics/man/optimize.portfolio.Rd 2013-08-16 18:23:47 UTC (rev 2801) @@ -17,7 +17,8 @@ the constraints and objectives for the optimization} \item{constraints}{default=NULL, a list of constraint - objects} + objects. An object of class 'v1_constraint' can be passed + in here.} \item{objectives}{default=NULL, a list of objective objects} @@ -88,27 +89,43 @@ instead use \code{trace}. The extension to ROI solves a limited type of convex - optimization problems: 1) Maxmimize portfolio return - subject leverage, box, and/or constraints on weights 2) - Minimize portfolio variance subject to leverage, box, - and/or group constraints (otherwise known as global - minimum variance portfolio) 3) Minimize portfolio - variance subject to leverage, box, and/or group - constraints and a desired portfolio return 4) Maximize - quadratic utility subject to leverage, box, and/or group - constraints and risk aversion parameter (this is passed - into \code{optimize.portfolio} as as added argument to - the \code{constraints} object) 5) Mean CVaR optimization - subject to leverage, box, and/or group constraints and - target portfolio return Lastly, because these convex - optimization problem are standardized, there is no need - for a penalty term. Therefore, the \code{multiplier} - argument in \code{\link{add.objective}} passed into the - complete constraint object are ingnored by the solver. + optimization problems: \itemize{ \item{Maximize + portfolio return subject to leverage, box, group, + position limit, target mean return, and/or factor + exposure constraints on weights} \item{Minimize portfolio + variance subject to leverage, box, group, and/or factor + exposure constraints (otherwise known as global minimum + variance portfolio)} \item{Minimize portfolio variance + subject to leverage, box, group, and/or factor exposure + constraints and a desired portfolio return} + \item{Maximize quadratic utility subject to leverage, + box, group, target mean return, and/or factor exposure + constraints and risk aversion parameter. (The risk + aversion parameter is passed into + \code{optimize.portfolio} as an added argument to the + \code{portfolio} object)} \item{Mean CVaR optimization + subject to leverage, box, group, position limit, target + mean return, and/or factor exposure constraints and + target portfolio return} } Lastly, because these convex + optimization problems are standardized, there is no need + for a penalty term. Therefore, the \code{multiplier} + argument in \code{\link{add.objective}} passed into the + complete constraint object is ignored by the solver. If you would like to interface with \code{optimize.portfolio} using matrix formulations, then use \code{ROI_old}. + + An object of class \code{v1_constraint} can be passed in + for the \code{constraints} argument. The + \code{v1_constraint} object was used in the previous 'v1' + specification to specify the constraints and objectives + for the optimization problem, see + \code{\link{constraint}}. We will attempt to detect if + the object passed into the constraints argument is a + \code{v1_constraint} object and update to the 'v2' + specification by adding the constraints and objectives to + the \code{portfolio} object.
} \author{ Kris Boudt, Peter Carl, Brian G. Peterson From noreply at r-forge.r-project.org Sat Aug 17 01:45:22 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 17 Aug 2013 01:45:22 +0200 (CEST) Subject: [Returnanalytics-commits] r2802 - pkg/PortfolioAnalytics/sandbox Message-ID: <20130816234523.02709184AD0@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-17 01:45:22 +0200 (Sat, 17 Aug 2013) New Revision: 2802 Added: pkg/PortfolioAnalytics/sandbox/testing_mean_vs_pamean.R Log: Adding testing script comparing mean and pamean as objective functions. Added: pkg/PortfolioAnalytics/sandbox/testing_mean_vs_pamean.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_mean_vs_pamean.R (rev 0) +++ pkg/PortfolioAnalytics/sandbox/testing_mean_vs_pamean.R 2013-08-16 23:45:22 UTC (rev 2802) @@ -0,0 +1,62 @@ +### Load the necessary packages +library(PortfolioAnalytics) + +data(edhec) + +# Drop some indexes and reorder +edhec.R = edhec[,c("Convertible Arbitrage", "Equity Market Neutral","Fixed Income Arbitrage", "Event Driven", "CTA Global", "Global Macro", "Long/Short Equity")] + +# Define pamean function +pamean <- function(n=12, R, weights, geometric=TRUE){ + as.vector(sum(Return.annualized(last(R,n), geometric=geometric)*weights)) +} + +# Define pasd function +pasd <- function(R, weights){ + as.numeric(StdDev(R=R, weights=weights)*sqrt(12)) # hardcoded for monthly data +} + +# Create initial portfolio object used to initialize ALL the buoy portfolios +init.portf <- portfolio.spec(assets=colnames(edhec.R), weight_seq=generatesequence(by=0.005)) +# Add leverage constraint +init.portf <- add.constraint(portfolio=init.portf, type="leverage", min_sum=0.99, max_sum=1.01) +# Add box constraint +init.portf <- add.constraint(portfolio=init.portf, type="box", min=0.05, max=0.3) + + +mean.portf <- add.objective(portfolio=init.portf, + type="return", # the kind of objective this is + name="mean", # name of the function + enabled=TRUE, # enable or disable the objective + multiplier=-1 # multiplier of -1 so this objective is maximized +) + +pamean.portf <- add.objective(portfolio=init.portf, + type="return", # the kind of objective this is + name="pamean", # name of the function + enabled=TRUE, # enable or disable the objective + multiplier=-1, # multiplier of -1 so this objective is maximized + arguments=list(n=60) +) + +pasd.portf <- add.objective(portfolio=init.portf, + type="risk", # the kind of objective this is + name="pasd", # name of the function + enabled=TRUE, # enable or disable the objective + multiplier=1 # multiplier of 1 so this objective is minimized +) + +permutations = 500 +rp = random_portfolios(portfolio=init.portf, permutations=permutations) + +# Takes about 1.4 seconds +mean_time <- system.time({ opt_mean <- optimize.portfolio(R=edhec.R, portfolio=mean.portf, optimize_method="random", rp=rp) }) + +# Takes nearly 71 seconds!!!!!
+pamean_time <- system.time({ opt_pamean <- optimize.portfolio(R=edhec.R, portfolio=pamean.portf, optimize_method="random", rp=rp) }) + +pamean_time / mean_time + +# takes about 1.5 seconds +pasd_time <- system.time({ opt_pasd <- optimize.portfolio(R=edhec.R, portfolio=pasd.portf, optimize_method="random", rp=rp) }) +pasd_time From noreply at r-forge.r-project.org Sat Aug 17 06:49:45 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 17 Aug 2013 06:49:45 +0200 (CEST) Subject: [Returnanalytics-commits] r2803 - pkg/PortfolioAnalytics/sandbox Message-ID: <20130817044945.58AF818499E@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-17 06:49:45 +0200 (Sat, 17 Aug 2013) New Revision: 2803 Added: pkg/PortfolioAnalytics/sandbox/testing_rp_opt_script.R Log: adding testing script to rework some examples from script.workshop2012.R. Examples include mean-StdDev, minmETL, and EqmETL problems using random as the optimization method Added: pkg/PortfolioAnalytics/sandbox/testing_rp_opt_script.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_rp_opt_script.R (rev 0) +++ pkg/PortfolioAnalytics/sandbox/testing_rp_opt_script.R 2013-08-17 04:49:45 UTC (rev 2803) @@ -0,0 +1,349 @@ +# script to run examples from script.workshop2012.R using the v2 specification + +# The following optimization problems will be run +# mean-StdDev +# - maximize mean-to-volatility (i.e. reward-to-risk) +# - BUOY 1 +# minmETL +# - minimize modified Expected Tail Loss +# - BUOY 4 +# eqmETL +# - equal risk modified Expected Tail Loss +# - BUOY 6 + +# Note: The script.workshop2012.R examples use pamean and pasd, I will simply +# use mean and StdDev. + +# The script is organized in a way such that the examples from the +# script.workshop2012.R (modified to work with the current code base) are shown +# first and then implemented using the v2 specification + +##### script.workshop2012.R: ##### +# v1 code from workshop + +##### v2: ##### +# v2 code + +# Include optimizer and multi-core packages +library(PortfolioAnalytics) +require(quantmod) +require(DEoptim) +require(foreach) + +# The multicore package, and therefore registerDoMC, should not be used in a +# GUI environment, because multiple processes then share the same GUI. 
Only use +# when running from the command line +require(doMC) +registerDoMC(3) + +data(edhec) + +# Drop some indexes and reorder +edhec.R = edhec[,c("Convertible Arbitrage", "Equity Market Neutral","Fixed Income Arbitrage", "Event Driven", "CTA Global", "Global Macro", "Long/Short Equity")] + +# Define pamean function +# pamean <- function(n=12, R, weights, geometric=TRUE){ +# as.vector(sum(Return.annualized(last(R,n), geometric=geometric)*weights)) +# } + +# Define pasd function +# pasd <- function(R, weights){ +# as.numeric(StdDev(R=R, weights=weights)*sqrt(12)) # hardcoded for monthly data +# } + +# Set some parameters +rebalance_period = 'quarters' # uses endpoints identifiers from xts +clean = "none" #"boudt" +permutations = 4000 + + +###### script.workshop2012: Initial constraint object ##### +## Set up the initial constraints object with constraints and objectives using +## the v1 specification + +# A set of box constraints used to initialize ALL the buoy portfolios +# init.constr <- constraint(assets = colnames(edhec.R), +# min = .05, # minimum position weight +# max = .3, #1, # maximum position weight +# min_sum=0.99, # minimum sum of weights must be equal to 1-ish +# max_sum=1.01, # maximum sum must also be about 1 +# weight_seq = generatesequence(by=.005) +# ) +# Add measure 1, annualized return +# init.constr <- add.objective_v1(constraints=init.constr, +# type="return", # the kind of objective this is +# name="mean", +# enabled=TRUE, # enable or disable the objective +# multiplier=0, # calculate it but don't use it in the objective +# ) +# Add measure 2, annualized standard deviation +# init.constr <- add.objective_v1(init.constr, +# type="risk", # the kind of objective this is +# name="StdDev", # to minimize from the sample +# enabled=TRUE, # enable or disable the objective +# multiplier=0, # calculate it but don't use it in the objective +# ) +# Add measure 3, CVaR with p=(1-1/12) +# set confidence for VaR/ES +# p=1-1/12 # for monthly +#p=.25 # for quarterly +# init.constr <- add.objective_v1(init.constr, +# type="risk", # the kind of objective this is +# name="CVaR", # the function to minimize +# enabled=FALSE, # enable or disable the objective +# multiplier=0, # calculate it but don't use it in the objective +# arguments=list(p=p), +# clean=clean +# ) + +##### v2: Initial Portfolio Object ##### +## Set up an initial portfolio object with constraints and objectives using +## v2 specification + +# Create initial portfolio object used to initialize ALL the buoy portfolios
init.portf <- portfolio.spec(assets=colnames(edhec.R), + weight_seq=generatesequence(by=0.005)) +# Add leverage constraint +init.portf <- add.constraint(portfolio=init.portf, + type="leverage", + min_sum=0.99, + max_sum=1.01) +# Add box constraint +init.portf <- add.constraint(portfolio=init.portf, + type="box", + min=0.05, + max=0.3) + +# Add measure 1, mean return +init.portf <- add.objective(portfolio=init.portf, + type="return", # the kind of objective this is + name="mean", # name of the function + enabled=TRUE, # enable or disable the objective + multiplier=0 # calculate it but don't use it in the objective +) + +# Add measure 2, standard deviation +init.portf <- add.objective(portfolio=init.portf, + type="risk", # the kind of objective this is + name="StdDev", # to minimize from the sample + enabled=TRUE, # enable or disable the objective + multiplier=0 # calculate it but don't use it in the objective +) + +# Add measure 3, ES with p=(1-1/12) +# set confidence for ES +p=1-1/12 # for
monthly + +init.portf <- add.objective(portfolio=init.portf, + type="risk", # the kind of objective this is + name="ES", # the function to minimize + enabled=FALSE, # enable or disable the objective + multiplier=0, # calculate it but don't use it in the objective + arguments=list(p=p) +) +# print(init.portf) +# summary(init.portf) + +##### script.workshop2012: BUOY 1 ##### +### Construct BUOY 1: Constrained Mean-StdDev Portfolio ##### +# MeanSD.constr <- init.constr +# Turn back on the return and sd objectives +# MeanSD.constr$objectives[[1]]$multiplier = -1 # mean +# MeanSD.constr$objectives[[2]]$multiplier = 1 # StdDev + +##### v2: BUOY 1 ##### +### Construct BUOY 1: Constrained Mean-StdDev Portfolio +MeanSD.portf <- init.portf +# Turn back on the return and sd objectives +MeanSD.portf$objectives[[1]]$multiplier = -1 # mean +MeanSD.portf$objectives[[2]]$multiplier = 1 # StdDev +# print(MeanSD.portf) +# summary(MeanSD.portf) + +##### script.workshop2012: BUOY 4 ##### +### Construct BUOY 4: Constrained Minimum mETL Portfolio +# MinmETL.constr <- init.constr +# Turn back on the mETL objective +# MinmETL.constr$objectives[[3]]$multiplier = 1 # mETL +# MinmETL.constr$objectives[[3]]$enabled = TRUE # mETL + +##### v2: BUOY 4 ##### +### Construct BUOY 4: Constrained Minimum mETL Portfolio +MinmETL.portf <- init.portf +# Turn back on the mETL objective +MinmETL.portf$objectives[[3]]$multiplier = 1 # mETL +MinmETL.portf$objectives[[3]]$enabled = TRUE # mETL +# print(MinmETL.portf) +# summary(MinmETL.portf) + +##### script.workshop2012: BUOY 6 ##### +### Construct BUOY 6: Constrained Equal mETL Contribution Portfolio +# EqmETL.constr <- add.objective_v1(init.constr, +# type="risk_budget", +# name="CVaR", +# enabled=TRUE, +# min_concentration=TRUE, +# arguments = list(p=(1-1/12), clean=clean)) +# EqmETL.constr$objectives[[3]]$multiplier = 1 # min mETL +# EqmETL.constr$objectives[[3]]$enabled = TRUE # min mETL + +##### v2: BUOY 6 ##### +### Construct BUOY 6: Constrained Equal mETL Contribution Portfolio +EqmETL.portf <- add.objective(init.portf, + type="risk_budget", + name="ES", + enabled=TRUE, + min_concentration=TRUE, + arguments = list(p=(1-1/12), clean=clean) +) +EqmETL.portf$objectives[[3]]$multiplier = 1 # min mETL +EqmETL.portf$objectives[[3]]$enabled = TRUE # min mETL +# print(EqmETL.portf) +# summary(EqmETL.portf) + +### Choose our 'R' variable +R=edhec.R # for monthlies + +# Generate a single set of random portfolios to evaluate against all constraint sets +print(paste('constructing random portfolios at',Sys.time())) +rp = random_portfolios(portfolio=init.portf, permutations=permutations) +print(paste('done constructing random portfolios at',Sys.time())) + +start_time<-Sys.time() +print(paste('Starting optimization at',Sys.time())) + +##### script.workshop2012.R: Evaluate BUOY 1 ##### +### Evaluate BUOY 1: Constrained Mean-StdDev Portfolio +# MeanSD.RND<-optimize.portfolio_v1(R=R, +# constraints=MeanSD.constr, +# optimize_method='random', +# search_size=1000, trace=TRUE, verbose=TRUE, +# rp=rp) # use the same random portfolios generated above +# plot(MeanSD.RND, risk.col="StdDev", return.col="mean") +# Evaluate the objectives through time +### requires PortfolioAnalytics build >= 1864 +# MeanSD.RND.t = optimize.portfolio.rebalancing_v1(R=R, +# constraints=MeanSD.constr, +# optimize_method='random', +# search_size=permutations, trace=TRUE, verbose=TRUE, +# rp=rp, # all the same as prior +# rebalance_on=rebalance_period, # uses xts 'endpoints' +# trailing_periods=NULL, # calculates from
inception +# training_period=36) # starts 3 years in to the data history +# MeanSD.w = extractWeights.rebal(MeanSD.RND.t) +# MeanSD=Return.rebalancing(edhec.R, MeanSD.w) +# colnames(MeanSD) = "MeanSD" + +##### v2: Evaluate BUOY 1 ##### +MeanSD.RND <- optimize.portfolio(R=R, + portfolio=MeanSD.portf, + optimize_method="random", + trace=TRUE, + rp=rp) +print(MeanSD.RND) +print(MeanSD.RND$elapsed_time) + +# Evaluate the objectives with RP through time +# MeanSD.RND.t <- optimize.portfolio.rebalancing(R=R, +# portfolio=MeanSD.portf, +# optimize_method="random", +# trace=TRUE, +# rp=rp, +# rebalance_on=rebalance_period, +# training_period=36) +# MeanSD.w = extractWeights.rebal(MeanSD.RND.t) +# MeanSD=Return.rebalancing(edhec.R, MeanSD.w) +# colnames(MeanSD) = "MeanSD" +# save(MeanSD.RND, MeanSD.RND.t, MeanSD.w, MeanSD, file=paste('MeanSD',Sys.Date(),'rda',sep='.')) + +print(paste('Completed meanSD optimization at',Sys.time(),'moving on to MinmETL')) + +##### script.workshop2012.R: Evaluate BUOY 4 ##### +### Evaluate BUOY 4: Constrained Minimum mETL Portfolio +# MinmETL.RND<-optimize.portfolio_v1(R=R, +# constraints=MinmETL.constr, +# optimize_method='random', +# search_size=1000, trace=TRUE, verbose=TRUE, +# rp=rp) # use the same random portfolios generated above +# plot(MinmETL.RND, risk.col="StdDev", return.col="mean") +# Evaluate the objectives with RP through time +# MinmETL.RND.t = optimize.portfolio.rebalancing_v1(R=R, +# constraints=MinmETL.constr, +# optimize_method='random', +# search_size=permutations, trace=TRUE, verbose=TRUE, +# rp=rp, # all the same as prior +# rebalance_on=rebalance_period, # uses xts 'endpoints' +# trailing_periods=NULL, # calculates from inception +# training_period=36) # starts 3 years in to the data history +# MinmETL.w = extractWeights.rebal(MinmETL.RND.t) +# MinmETL=Return.rebalancing(edhec.R, MinmETL.w) +# colnames(MinmETL) = "MinmETL" + +##### v2: Evaluate BUOY 4 ##### +MinmETL.RND <- optimize.portfolio(R=R, + portfolio=MinmETL.portf, + optimize_method="random", + trace=TRUE, + rp=rp) +print(MinmETL.RND) +print(MinmETL.RND$elapsed_time) + +# Evaluate the objectives with RP through time +# MinmETL.RND.t <- optimize.portfolio.rebalancing(R=R, +# portfolio=MinmETL.portf, +# optimize_method="random", +# trace=TRUE, +# rp=rp, +# rebalance_on=rebalance_period, +# training_period=36) +# MinmETL.w = extractWeights.rebal(MinmETL.RND.t) +# MinmETL=Return.rebalancing(edhec.R, MinmETL.w) +# colnames(MinmETL) = "MinmETL" +# save(MinmETL.RND, MinmETL.RND.t, MinmETL.w, MinmETL,file=paste('MinmETL',Sys.Date(),'rda',sep='.')) + +print(paste('Completed MinmETL optimization at',Sys.time(),'moving on to EqmETL')) + +##### script.workshop2012.R: Evaluate BUOY 6 ##### +### Evaluate BUOY 6: Constrained Equal mETL Contribution Portfolio +# EqmETL.RND<-optimize.portfolio_v1(R=R, +# constraints=EqmETL.constr, +# optimize_method='random', +# search_size=1000, trace=TRUE, verbose=TRUE, +# rp=rp) # use the same random portfolios generated above +# EqmETL.RND.t = optimize.portfolio.rebalancing_v1(R=R, +# constraints=EqmETL.constr, +# optimize_method='random', +# search_size=permutations, trace=TRUE, verbose=TRUE, +# rp=rp, # all the same as prior +# rebalance_on=rebalance_period, # uses xts 'endpoints' +# trailing_periods=NULL, # calculates from inception +# training_period=36) # starts 3 years in to the data history +# EqmETL.w = extractWeights.rebal(EqmETL.RND.t) +# EqmETL=Return.rebalancing(edhec.R, EqmETL.w) +# colnames(EqmETL) = "EqmETL" + +##### v2: Evaluate BUOY 6 ##### 
+EqmETL.RND <- optimize.portfolio(R=R, + portfolio=EqmETL.portf, + optimize_method="random", + trace=TRUE, + rp=rp) +print(EqmETL.RND) +print(EqmETL.RND$elapsed_time) + +# Evaluate the objectives with RP through time +# EqmETL.RND.t <- optimize.portfolio.rebalancing(R=R, +# portfolio=EqmETL.portf, +# optimize_method="random", +# trace=TRUE, +# rp=rp, +# rebalance_on=rebalance_period, +# training_period=36) +# EqmETL.w = extractWeights.rebal(EqmETL.RND.t) +# EqmETL=Return.rebalancing(edhec.R, EqmETL.w) +# colnames(EqmETL) = "EqmETL" +# save(EqmETL.RND, EqmETL.RND.t, EqmETL.w, EqmETL, file=paste('EqmETL',Sys.Date(),'rda',sep='.')) + +end_time<-Sys.time() +print("Optimization Complete") +print(end_time-start_time) From noreply at r-forge.r-project.org Sat Aug 17 13:39:33 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 17 Aug 2013 13:39:33 +0200 (CEST) Subject: [Returnanalytics-commits] r2804 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: . R Week1/Code man Message-ID: <20130817113933.87F0D1853D9@r-forge.r-project.org> Author: shubhanm Date: 2013-08-17 13:39:33 +0200 (Sat, 17 Aug 2013) New Revision: 2804 Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/GLMSmoothIndex.R pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd Log: /.Rd files Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE 2013-08-17 04:49:45 UTC (rev 2803) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE 2013-08-17 11:39:33 UTC (rev 2804) @@ -4,7 +4,6 @@ export(EMaxDDGBM) export(GLMSmoothIndex) export(QP.Norm) -export(Return.GLM) export(SterlingRatio.Normalized) export(table.ComparitiveReturn.GLM) export(table.EMaxDDGBM) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R 2013-08-17 04:49:45 UTC (rev 2803) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R 2013-08-17 11:39:33 UTC (rev 2804) @@ -1,40 +1,29 @@ -#' Getmansky Lo Markov Unsmooth Return Model -#' -#' -#' True returns represent the flow of information that would determine the equilibrium +#' @title GLM Return Model +#' @description True returns represent the flow of information that would determine the equilibrium #' value of the fund's securities in a frictionless market. However, true economic -#' returns are not observed. Instead, Rot -#' denotes the reported or observed return in -#' period t, which is a weighted average of the fund's true returns over the most recent k ? 1 -#' periods, includingthe current period. -#' This averaging process captures the essence of smoothed returns in several -#' respects. From the perspective of illiquidity-driven smoothing, is consistent -#' with several models in the nonsynchronous tradingliterat ure. For example, Cohen -#' et al. (1 986, Chapter 6.1) propose a similar weighted-average model for observed -#' returns. -#' -#' The Geltner autocorrelation adjusted return series may be calculated via: -#' -#' @param Ra an xts, vector, matrix, data frame, timeSeries or zoo object of +#' returns are not observed. 
The returns to hedge funds and other alternative investments are often +#' highly serially correlated. We propose an econometric model of return smoothing and develop estimators for the smoothing +#' profile as well as a smoothing-adjusted Sharpe ratio. +#' @usage +#' Return.GLM(edhec,4) +#' @param +#' Ra : an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns - -#' @param q order of autocorrelation coefficient -#' @author R -#' @references "An econometric model of serial correlation and -#' illiquidity in hedge fund returns -#' Mila Getmansky1, Andrew W. Lo*, Igor Makarov -#' MIT Sloan School of Management, 50 Memorial Drive, E52-432, Cambridge, MA 02142-1347, USA -#' Received 16 October 2002; received in revised form 7 March 2003; accepted 15 May 2003 -#' Available online 10 July 2004 -#' -#' +#' @param +#' q : order of autocorrelation coefficient lag factors +#' +#' @details +#' To quantify the impact of all of these possible sources of serial correlation, denote by R(t) +#' the true economic return of a hedge fund in period 't'; and let R(t) satisfy the following linear +#' single-factor model: +#' \deqn{R(0,t) = \theta_{0}R(t) + \theta_{1}R(t-1) + \theta_{2}R(t-2) + \dots + \theta_{k}R(t-k)} +#' where \eqn{\theta_{i}} is the weight on the i-th autocorrelation lag, and the weights sum to 1. +#' @author Brian Peterson, Peter Carl, Shubhankit Mohan +#' @references Mila Getmansky, Andrew W. Lo, Igor Makarov, \emph{An econometric model of serial correlation +#' and illiquidity in hedge fund returns}, Journal of Financial Economics 74 (2004). #' @keywords ts multivariate distribution models -#' @examples -#' -#' data(edhec) -#' Return.GLM(edhec,4) -#' -#' @export Return.GLM <- function (Ra,q=3) { # @author Brian G. Peterson, Peter Carl @@ -82,6 +71,6 @@ # This R package is distributed under the terms of the GNU Public License (GPL) # for full details see the file COPYING # -# $Id: Return.GLM.R 2163 2012-07-16 00:30:19Z braverock $ +# $Id: Return.GLM.R 2334 2013-04-01 16:57:25Z braverock $ # ############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/GLMSmoothIndex.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/GLMSmoothIndex.R 2013-08-17 04:49:45 UTC (rev 2803) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Week1/Code/GLMSmoothIndex.R 2013-08-17 11:39:33 UTC (rev 2804) @@ -1,6 +1,6 @@ -#'@title Getmansky Lo Markov Smoothing Index Parameter -#'@description -#'A useful summary statistic for measuring the concentration of weights is +#' @title Getmansky Lo Markov Smoothing Index Parameter +#' @description +#' A useful summary statistic for measuring the concentration of weights is #' a sum of square of Moving Average lag coefficient. #' This measure is well known in the industrial organization literature as the #' Herfindahl index, a measure of the concentration of firms in a given industry. @@ -11,7 +11,7 @@ #' \deqn{ R_t = {\mu} + {\beta}{{\delta}}_t+ \xi_t} #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns -#' @author R +#' @author Peter Carl #' @aliases Return.Geltner #' @references "An econometric model of serial correlation and illiquidity in #' hedge fund returns" Mila Getmansky1, Andrew W.
Lo*, Igor Makarov Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd 2013-08-17 04:49:45 UTC (rev 2803) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd 2013-08-17 11:39:33 UTC (rev 2804) @@ -1,49 +1,44 @@ \name{Return.GLM} \alias{Return.GLM} -\title{Getmansky Lo Markov Unsmooth Return Model} +\title{GLM Return Model} \usage{ - Return.GLM(Ra, q = 3) + Return.GLM(edhec,4) } \arguments{ - \item{Ra}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} + \item{Ra}{: an xts, vector, matrix, data frame, + timeSeries or zoo object of asset returns} - \item{q}{order of autocorrelation coefficient} + \item{q}{: order of autocorrelation coefficient lag + factors} } \description{ True returns represent the flow of information that would determine the equilibrium value of the fund's securities in a frictionless market. However, true economic returns - are not observed. Instead, Rot denotes the reported or - observed return in period t, which is a weighted average - of the fund's true returns over the most recent k ? 1 - periods, includingthe current period. This averaging - process captures the essence of smoothed returns in - several respects. From the perspective of - illiquidity-driven smoothing, is consistent with several - models in the nonsynchronous tradingliterat ure. For - example, Cohen et al. (1 986, Chapter 6.1) propose a - similar weighted-average model for observed returns. + are not observed. The returns to hedge funds and other + alternative investments are often highly serially + correlated. We propose an econometric model of return + smoothing and develop estimators for the smoothing + profile as well as a smoothing-adjusted Sharpe ratio. } \details{ - The Geltner autocorrelation adjusted return series may be - calculated via: + To quantify the impact of all of these possible sources + of serial correlation, denote by R(t) the true economic + return of a hedge fund in period 't'; and let R(t) + satisfy the following linear single-factor model: + \deqn{R(0,t) = \theta_{0}R(t) + \theta_{1}R(t-1) + + \theta_{2}R(t-2) + \dots + \theta_{k}R(t-k)} where + \eqn{\theta_{i}} is the weight on the i-th autocorrelation + lag, and the weights sum to 1. } -\examples{ -data(edhec) -Return.GLM(edhec,4) -} \author{ - R + Brian Peterson, Peter Carl, Shubhankit Mohan } \references{ - "An econometric model of serial correlation and - illiquidity in hedge fund returns Mila Getmansky1, Andrew - W. Lo*, Igor Makarov MIT Sloan School of Management, 50 - Memorial Drive, E52-432, Cambridge, MA 02142-1347, USA - Received 16 October 2002; received in revised form 7 - March 2003; accepted 15 May 2003 Available online 10 July - 2004 + Mila Getmansky, Andrew W. Lo, Igor Makarov, \emph{An + econometric model of serial correlation and + illiquidity in hedge fund returns}, Journal of Financial + Economics 74 (2004).
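A small numeric sketch can make the smoothing model in the documentation above concrete. The theta values below are illustrative only (any non-negative weights summing to one fit the model); the use of stats::filter to build the observed series is my own construction, not code from these commits.

# Observed returns as a theta-weighted average of current and lagged
# true returns, with the weights summing to one (illustrative values).
set.seed(42)
R.true <- rnorm(100, mean=0.01, sd=0.04)  # unobserved "true" returns
theta  <- c(0.6, 0.3, 0.1)                # smoothing profile, sums to 1
R.obs  <- stats::filter(R.true, theta, method="convolution", sides=1)
# smoothing lowers the apparent volatility of the observed series
sd(R.true)
sd(na.omit(as.numeric(R.obs)))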
} \keyword{distribution} \keyword{models} From noreply at r-forge.r-project.org Sat Aug 17 16:34:11 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 17 Aug 2013 16:34:11 +0200 (CEST) Subject: [Returnanalytics-commits] r2805 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: R man Message-ID: <20130817143411.409921806EB@r-forge.r-project.org> Author: shubhanm Date: 2013-08-17 16:34:10 +0200 (Sat, 17 Aug 2013) New Revision: 2805 Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.Autocorrelation.R pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/chart.Autocorrelation.Rd Log: /.Rd Documentation Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R 2013-08-17 11:39:33 UTC (rev 2804) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R 2013-08-17 14:34:10 UTC (rev 2805) @@ -1,4 +1,4 @@ -#' calculate a multiperiod or annualized Autocorrleation adjusted Standard Deviation +#' @title Autocorrelation adjusted Standard Deviation #' #' @aliases sd.multiperiod sd.annualized StdDev.annualized #' @param x an xts, vector, matrix, data frame, timeSeries or zoo object of @@ -7,18 +7,17 @@ #' @param scale number of periods in a year (daily scale = 252, monthly scale = #' 12, quarterly scale = 4) #' @param \dots any other passthru parameters -#' @author R +#' @author Peter Carl, Brian Peterson, Shubhankit Mohan #' @seealso \code{\link[stats]{sd}} \cr #' \url{http://wikipedia.org/wiki/inverse-square_law} #' @references Burghardt, G., and L. Liu, \emph{ It's the Autocorrelation, Stupid (November 2012) Newedge -#' working paper.http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf \cr +#' working paper.} +#' \code{\link[stats]{}} \cr +#' \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} #' @keywords ts multivariate distribution models -#' @examples +#' @usage ACsd.annualized(edhec,3) +#' #' -#' data(edhec) -#' ACsd.annualized(edhec,3) - -#' #' @export #' @rdname ACStdDev.annualized ACStdDev.annualized <- ACsd.annualized <- ACsd.multiperiod <- Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R 2013-08-17 11:39:33 UTC (rev 2804) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R 2013-08-17 14:34:10 UTC (rev 2805) @@ -1,13 +1,9 @@ -#' Expected Drawdown using Brownian Motion Assumptions +#' -#' Works on the model specified by Maddon-Ismail -#' -#' -#' -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of -#' asset returns - -#' @author R +#' @title Expected Drawdown using Brownian Motion Assumptions +#' +#' @description Works on the model specified by Magdon-Ismail, which investigates the behavior of this statistic for a Brownian motion +#' with drift.
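# An illustrative sketch (not code from this commit) of the autocorrelation
# adjustment behind ACStdDev.annualized, using the usual variance-of-a-sum
# result from the Newedge reference above:
#   Var(R_1 + ... + R_T) = sigma^2 * (T + 2 * sum_{k=1}^{q} (T - k) * rho_k)
# The argument names and the exact formula used by the package are assumptions
# here; this also assumes the adjusted variance stays positive.
ac.sd.annualized.sketch <- function(R, q=6, scale=12) {
  rho   <- acf(as.numeric(R), lag.max=q, plot=FALSE)$acf[-1]  # rho_1..rho_q
  sigma <- sd(as.numeric(R))
  sigma * sqrt(scale + 2 * sum((scale - seq_len(q)) * rho))
}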
+#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#' @author Peter Carl, Brian Peterson, Shubhankit Mohan #' @keywords Expected Drawdown Using Brownian Motion Assumptions #' @rdname EmaxDDGBM #' @export Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.Autocorrelation.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.Autocorrelation.R 2013-08-17 11:39:33 UTC (rev 2804) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.Autocorrelation.R 2013-08-17 14:34:10 UTC (rev 2805) @@ -1,17 +1,20 @@ -#' Stacked Bar Plot of Autocorrelation Lag Coefficients +#' @title Stacked Bar Autocorrelation Plot #' -#' A wrapper to create box and whiskers plot of comparitive inputs +#' @description A wrapper to create box and whiskers plot of comparative inputs #' -#' We have also provided controls for all the symbols and lines in the chart. +#' @details We have also provided controls for all the symbols and lines in the chart. #' One default, set by \code{as.Tufte=TRUE}, will strip chartjunk and draw a #' Boxplot per recommendations by Burghardt, Duncan and Liu(2013) #' #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of #' an asset return #' @return Stack Bar plot of lagged return coefficients -#' @author R +#' @author Peter Carl, Brian Peterson, Shubhankit Mohan #' @seealso \code{\link[graphics]{boxplot}} -#' @references Burghardt, Duncan and Liu(2013) \emph{It's the autocorrelation, stupid}. AlternativeEdge Note November, 2012 } +#' @references Burghardt, G., and L. Liu, \emph{ It's the Autocorrelation, Stupid (November 2012) Newedge +#' working paper.} +#' \code{\link[stats]{}} \cr +#' \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} #' @keywords Autocorrelation lag factors #' @examples #' @@ -28,7 +31,7 @@ # A wrapper to create box and whiskers plot of autocorrelation lag coefficients # of the first six factors - R = checkData(R, method="xts") + # R = checkData(R, method="xts") # Graph autos with adjacent bars using rainbow colors Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd 2013-08-17 11:39:33 UTC (rev 2804) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd 2013-08-17 14:34:10 UTC (rev 2805) @@ -3,9 +3,9 @@ \alias{sd.annualized} \alias{sd.multiperiod} \alias{StdDev.annualized} -\title{calculate a multiperiod or annualized Autocorrleation adjusted Standard Deviation} +\title{Autocorrelation adjusted Standard Deviation} \usage{ - ACStdDev.annualized(R, lag = 6, scale = NA, ...) + ACsd.annualized(edhec,3) } \arguments{ \item{x}{an xts, vector, matrix, data frame, timeSeries @@ -20,21 +20,16 @@ \item{\dots}{any other passthru parameters} } \description{ - calculate a multiperiod or annualized Autocorrleation - adjusted Standard Deviation + Autocorrelation adjusted Standard Deviation } -\examples{ -data(edhec) - ACsd.annualized(edhec,3) -} \author{ - R + Peter Carl, Brian Peterson, Shubhankit Mohan } \references{ Burghardt, G., and L.
Liu, \emph{ It's the + Autocorrelation, Stupid (November 2012) Newedge working - paper.http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf - \cr + paper.} \code{\link[stats]{}} \cr + \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} } \seealso{ \code{\link[stats]{sd}} \cr Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd 2013-08-17 11:39:33 UTC (rev 2804) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd 2013-08-17 14:34:10 UTC (rev 2805) @@ -28,7 +28,7 @@ satisfy the following linear single-factor model: \deqn{R(0,t) = \theta_{0}R(t) + \theta_{1}R(t-1) + \theta_{2}R(t-2) + \dots + \theta_{k}R(t-k)} where - \eqn{\theta} is defined as the weighted lag of + \eqn{\theta_{i}} is the weight on the i-th autocorrelation lag, and the weights sum to 1. } \author{ Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/chart.Autocorrelation.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/chart.Autocorrelation.Rd 2013-08-17 11:39:33 UTC (rev 2804) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/chart.Autocorrelation.Rd 2013-08-17 14:34:10 UTC (rev 2805) @@ -1,6 +1,6 @@ \name{chart.Autocorrelation} \alias{chart.Autocorrelation} -\title{Stacked Bar Plot of Autocorrelation Lag Coefficients} +\title{Stacked Bar Autocorrelation Plot} \usage{ chart.Autocorrelation(R, ...) } @@ -27,12 +27,13 @@ \author{ - R + Peter Carl, Brian Peterson, Shubhankit Mohan } \references{ - Burghardt, Duncan and Liu(2013) \emph{It's the - autocorrelation, stupid}. AlternativeEdge Note November, - 2012 } + Burghardt, G., and L.
Liu, \emph{ It's the + Autocorrelation, Stupid (November 2012) Newedge working + paper.} \code{\link[stats]{}} \cr + \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} } \seealso{ \code{\link[graphics]{boxplot}} From noreply at r-forge.r-project.org Sat Aug 17 18:39:52 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 17 Aug 2013 18:39:52 +0200 (CEST) Subject: [Returnanalytics-commits] r2806 - pkg/PortfolioAnalytics/R Message-ID: <20130817163952.511421854F0@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-17 18:39:51 +0200 (Sat, 17 Aug 2013) New Revision: 2806 Modified: pkg/PortfolioAnalytics/R/generics.R Log: fixing print and summary methods to show better output for objects with risk_budget objectives Modified: pkg/PortfolioAnalytics/R/generics.R =================================================================== --- pkg/PortfolioAnalytics/R/generics.R 2013-08-17 14:34:10 UTC (rev 2805) +++ pkg/PortfolioAnalytics/R/generics.R 2013-08-17 16:39:51 UTC (rev 2806) @@ -230,14 +230,26 @@ print.default(object$weights, digits=digits) cat("\n") - # get objective measure + # get objective measures objective_measures <- object$objective_measures tmp_obj <- as.numeric(unlist(objective_measures)) names(tmp_obj) <- names(objective_measures) cat("Objective Measures:\n") - for(i in 1:length(tmp_obj)){ - print(tmp_obj[i], digits=digits) + for(i in 1:length(objective_measures)){ + print(tmp_obj[i], digits=4) cat("\n") + if(length(objective_measures[[i]]) > 1){ + # This will be the case for any objective measures with risk budgets + for(j in 2:length(objective_measures[[i]])){ + tmpl <- objective_measures[[i]][j] + cat(names(tmpl), ":\n") + tmpv <- unlist(tmpl) + names(tmpv) <- names(object$weights) + print(tmpv) + cat("\n") + } + } + cat("\n") } cat("\n") } @@ -263,14 +275,26 @@ print.default(object$weights, digits=digits) cat("\n") - # get objective measure + # get objective measures objective_measures <- object$objective_measures tmp_obj <- as.numeric(unlist(objective_measures)) names(tmp_obj) <- names(objective_measures) cat("Objective Measures:\n") - for(i in 1:length(tmp_obj)){ - print(tmp_obj[i], digits=digits) + for(i in 1:length(objective_measures)){ + print(tmp_obj[i], digits=4) cat("\n") + if(length(objective_measures[[i]]) > 1){ + # This will be the case for any objective measures with risk budgets + for(j in 2:length(objective_measures[[i]])){ + tmpl <- objective_measures[[i]][j] + cat(names(tmpl), ":\n") + tmpv <- unlist(tmpl) + names(tmpv) <- names(object$weights) + print(tmpv) + cat("\n") + } + } + cat("\n") } cat("\n") } @@ -296,14 +320,26 @@ print.default(object$weights, digits=digits) cat("\n") - # get objective measure + # get objective measures objective_measures <- object$objective_measures tmp_obj <- as.numeric(unlist(objective_measures)) names(tmp_obj) <- names(objective_measures) cat("Objective Measures:\n") - for(i in 1:length(tmp_obj)){ - print(tmp_obj[i], digits=digits) + for(i in 1:length(objective_measures)){ + print(tmp_obj[i], digits=4) cat("\n") + if(length(objective_measures[[i]]) > 1){ + # This will be the case for any objective measures with risk budgets + for(j in 2:length(objective_measures[[i]])){ + tmpl <- objective_measures[[i]][j] + cat(names(tmpl), ":\n") + tmpv <- unlist(tmpl) + names(tmpv) <- names(object$weights) + print(tmpv) + cat("\n") + } + } + cat("\n") } cat("\n") } @@ -329,15 +365,26 @@ print.default(object$weights, digits=digits) cat("\n") - # get objective measure - # get objective measure 
+ # get objective measures objective_measures <- object$objective_measures tmp_obj <- as.numeric(unlist(objective_measures)) names(tmp_obj) <- names(objective_measures) cat("Objective Measures:\n") - for(i in 1:length(tmp_obj)){ - print(tmp_obj[i], digits=digits) + for(i in 1:length(objective_measures)){ + print(tmp_obj[i], digits=4) cat("\n") + if(length(objective_measures[[i]]) > 1){ + # This will be the case for any objective measures with risk budgets + for(j in 2:length(objective_measures[[i]])){ + tmpl <- objective_measures[[i]][j] + cat(names(tmpl), ":\n") + tmpv <- unlist(tmpl) + names(tmpv) <- names(object$weights) + print(tmpv) + cat("\n") + } + } + cat("\n") } cat("\n") } @@ -368,13 +415,25 @@ # The objective measure is object$out for ROI cat("Objective Measures:\n") if(!is.null(object$objective_measures)){ - # get objective measure + # get objective measures objective_measures <- object$objective_measures tmp_obj <- as.numeric(unlist(objective_measures)) names(tmp_obj) <- names(objective_measures) - for(i in 1:length(tmp_obj)){ - print(tmp_obj[i]) + for(i in 1:length(objective_measures)){ + print(tmp_obj[i], digits=4) cat("\n") + if(length(objective_measures[[i]]) > 1){ + # This will be the case for any objective measures with risk budgets + for(j in 2:length(objective_measures[[i]])){ + tmpl <- objective_measures[[i]][j] + cat(names(tmpl), ":\n") + tmpv <- unlist(tmpl) + names(tmpv) <- names(object$weights) + print(tmpv) + cat("\n") + } + } + cat("\n") } } else { print(as.numeric(object$out)) From noreply at r-forge.r-project.org Sat Aug 17 20:01:03 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 17 Aug 2013 20:01:03 +0200 (CEST) Subject: [Returnanalytics-commits] r2807 - pkg/PortfolioAnalytics/R Message-ID: <20130817180103.781DE18491A@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-17 20:01:02 +0200 (Sat, 17 Aug 2013) New Revision: 2807 Modified: pkg/PortfolioAnalytics/R/extractstats.R Log: adding function to extract the $objective_measures slot Modified: pkg/PortfolioAnalytics/R/extractstats.R =================================================================== --- pkg/PortfolioAnalytics/R/extractstats.R 2013-08-17 16:39:51 UTC (rev 2806) +++ pkg/PortfolioAnalytics/R/extractstats.R 2013-08-17 18:01:02 UTC (rev 2807) @@ -333,3 +333,26 @@ names(result) <- rnames return(result) } + +#' Extract the objective measures +#' +#' This function will extract the objective measures from the optimal portfolio +#' run via \code{optimize.portfolio} +#' +#' @param object list returned by optimize.portfolio +#' @return list of objective measures +#' @seealso \code{\link{optimize.portfolio}} +#' @author Ross Bennett +#' @export +extractObjectiveMeasures <- function(object){ + if(!inherits(object, "optimize.portfolio")) stop("object must be of class 'optimize.portfolio'") + if(inherits(object, "optimize.portfolio.ROI")){ + # objective measures returned as $out for ROI solvers + out <- object$out + } else { + # objective measures returned as $objective_measures for all other solvers + out <- object$objective_measures + } + return(out) +} + From noreply at r-forge.r-project.org Sat Aug 17 23:18:06 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 17 Aug 2013 23:18:06 +0200 (CEST) Subject: [Returnanalytics-commits] r2808 - in pkg/PerformanceAnalytics/sandbox/pulkit: . 
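# Illustrative sketch (names and numbers invented for illustration, not from
# these commits) of the nested structure that the revised print and summary
# methods above walk, and that the extractor added below returns: a
# risk_budget objective stores the scalar risk measure first, followed by
# per-asset contribution components.
om <- list(ES = list(ES = 0.045,
                     contribution = c(CA = 0.012, EMN = 0.008, FIA = 0.025),
                     pct_contrib_ES = c(CA = 0.27, EMN = 0.18, FIA = 0.55)))
length(om[[1]]) > 1  # TRUE, so the j-loop prints each component vector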
R man Message-ID: <20130817211806.D7E5018491A@r-forge.r-project.org> Author: braverock Date: 2013-08-17 23:18:06 +0200 (Sat, 17 Aug 2013) New Revision: 2808 Added: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE pkg/PerformanceAnalytics/sandbox/pulkit/man/AlphaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/BenchmarkSR.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/CdarMultiPath.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/DrawdownGPD.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MonteSimulTriplePenance.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.SRIndifference.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/golden_section.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/rollDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/table.PSR.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/table.Penance.Rd Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MonteSimulTriplePenance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R Log: - add DESCRIPTION, NAMESPACE - additional changes so that roxygenize could work on the entire package - additional changes so that R CMD build would work Added: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-17 21:18:06 UTC (rev 2808) @@ -0,0 +1,46 @@ +Package: noniid.pm +Type: Package +Title: Non-i.i.d. GSoC 2013 Pulkit +Version: 0.1 +Date: $Date: 2013-05-13 14:30:22 -0500 (Mon, 13 May 2013) $ +Author: Pulkit Mahotra +Contributors: Peter Carl, Brian G. Peterson +Depends: + xts, + PerformanceAnalytics +Suggests: + PortfolioAnalytics +Maintainer: Brian G. Peterson +Description: GSoC 2013 project to replicate literature on drawdowns and + non-i.i.d assumptions in finance. 
+License: GPL (>=3) +ByteCompile: TRUE +Collate: + 'BenchmarkPlots.R' + 'BenchmarkSR.R' + 'CDaRMultipath.R' + 'CdaR.R' + 'chart.Penance.R' + 'chart.REDD.R' + 'chart.SharpeEfficient.R' + 'Drawdownalpha.R' + 'DrawdownBetaMulti.R' + 'DrawdownBeta.R' + 'EDDCOPS.R' + 'edd.R' + 'Edd.R' + 'ExtremeDrawdown.R' + 'GoldenSection.R' + 'MaxDD.R' + 'MinTRL.R' + 'MonteSimulTriplePenance.R' + 'ProbSharpeRatio.R' + 'PSRopt.R' + 'REDDCOPS.R' + 'redd.R' + 'REM.R' + 'SRIndifferenceCurve.R' + 'table.Penance.R' + 'table.PSR.R' + 'TriplePenance.R' + 'TuW.R' Added: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-17 21:18:06 UTC (rev 2808) @@ -0,0 +1,10 @@ +export(AlphaDrawdown) +export(BenchmarkSR) +export(chart.BenchmarkSR) +export(chart.SRIndifference) +export(EconomicDrawdown) +export(EDDCOPS) +export(MinTrackRecord) +export(REDDCOPS) +export(rollDrawdown) +export(rollEconomicMax) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R 2013-08-17 18:01:02 UTC (rev 2807) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R 2013-08-17 21:18:06 UTC (rev 2808) @@ -8,7 +8,6 @@ #' average pairwise correlation. The Returns are given as #' the input with the benchmark Sharpe Ratio as the output. #' -#'@aliases BenchmarkSR #'\deqn{SR_B = \bar{SR}\sqrt{\frac{S}{1+(S-1)\bar{\rho}}}} #' #'Here \eqn{\bar{SR}} is the average SR of the portfolio and \eqn{\bar{\rho}} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R 2013-08-17 18:01:02 UTC (rev 2807) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R 2013-08-17 21:18:06 UTC (rev 2808) @@ -1,7 +1,7 @@ #'@title #'Conditional Drawdown at Risk for Multiple Sample Path #' -#'@desctipion +#'@description #' #' For a given \eqn{\alpha \epsilon [0,1]} in the multiple sample-paths setting,CDaR, #' denoted by \eqn{D_{\alpha}(w)}, is the average of \eqn{(1-\alpha).100\%} drawdowns Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R 2013-08-17 18:01:02 UTC (rev 2807) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R 2013-08-17 21:18:06 UTC (rev 2808) @@ -15,7 +15,7 @@ #' \eqn{f(x_2) Author: braverock Date: 2013-08-17 23:20:42 +0200 (Sat, 17 Aug 2013) New Revision: 2809 Added: pkg/PerformanceAnalytics/sandbox/pulkit/.Rbuildignore Log: - add .Rbuildignore so that the week* dirs aren't in R CMD build Added: pkg/PerformanceAnalytics/sandbox/pulkit/.Rbuildignore =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/.Rbuildignore (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/.Rbuildignore 2013-08-17 21:20:42 UTC (rev 2809) @@ -0,0 +1,5 @@ +sandbox +generatechangelog\.sh +ChangeLog\.1\.0\.0 +week* +Week* From noreply at r-forge.r-project.org Sat Aug 17 23:40:48 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 17 Aug 2013 23:40:48 +0200 (CEST) Subject: [Returnanalytics-commits] r2810 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: . 
R man Message-ID: <20130817214048.AF73D18491A@r-forge.r-project.org> Author: braverock Date: 2013-08-17 23:40:48 +0200 (Sat, 17 Aug 2013) New Revision: 2810 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rbuildignore pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EMaxDDGBM.Rd Removed: pkg/PerformanceAnalytics/sandbox/Shubhankit/Shubhankit/ Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Normalized.R pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd Log: - add DESCRIPTION, NAMESPACE, .Rbuildignore - changes to allow roxygenize to work - changes to allow R CMD build to work Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rbuildignore =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rbuildignore (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rbuildignore 2013-08-17 21:40:48 UTC (rev 2810) @@ -0,0 +1,5 @@ +sandbox +generatechangelog\.sh +ChangeLog\.1\.0\.0 +week* +Week* Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION 2013-08-17 21:20:42 UTC (rev 2809) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION 2013-08-17 21:40:48 UTC (rev 2810) @@ -1,53 +1,32 @@ -Package: Shubhankit -Type: Package -Title: Econometric tools for performance and risk analysis. -Version: 1.1.0 -Date: $Date: 2013-01-29 21:04:00 +0800 (Tue, 29 Jan 2013) $ -Author: Peter Carl, Brian G. Peterson -Maintainer: Brian G. Peterson -Description: Collection of econometric functions for - performance and risk analysis. This package aims to aid - practitioners and researchers in utilizing the latest - research in analysis of non-normal return streams. In - general, it is most tested on return (rather than - price) data on a regular scale, but most functions will - work with irregular return data as well, and increasing - numbers of functions will work with P&L or price data - where possible. -Depends: - R (>= 2.14.0), - zoo, - xts (>= 0.8-9) -Suggests: - Hmisc, - MASS, - tseries, - quadprog, - sn, - robustbase, - quantreg, - gplots, - ff -License: GPL -URL: http://r-forge.r-project.org/projects/returnanalytics/ -Copyright: (c) 2004-2012 -Contributors: Kris Boudt, Diethelm Wuertz, Eric Zivot, Matthieu Lestel -Thanks: A special thanks for additional contributions from - Stefan Albrecht, Khahn Nygyen, Jeff Ryan, - Josh Ulrich, Sankalp Upadhyay, Tobias Verbeke, - H. Felix Wittmann, Ram Ahluwalia -Collate: - 'GLMSmoothIndex.R' - 'chart.Autocorrelation.R' - 'ACStdDev.annualized.R' - 'CalmarRatio.Normalized.R' - 'na.skip.R' - 'Return.GLM.R' - 'table.ComparitiveReturn.GLM.R' - 'table.UnsmoothReturn.R' - 'UnsmoothReturn.R' - 'EmaxDDGBM.R' - 'maxDDGBM.R' - 'table.normDD.R' - 'CDDopt.R' - 'CDrawdown.R' +Package: noniid.sm +Type: Package +Title: Non-i.i.d. GSoC 2013 Shubhankit +Version: 0.1 +Date: $Date: 2013-05-13 14:30:22 -0500 (Mon, 13 May 2013) $ +Author: Shubhankit Mohan +Contributors: Peter Carl, Brian G. Peterson +Depends: + xts, + PerformanceAnalytics +Suggests: + PortfolioAnalytics +Maintainer: Brian G. Peterson +Description: GSoC 2013 project to replicate literature on drawdowns and + non-i.i.d assumptions in finance. 
+License: GPL-3 +ByteCompile: TRUE +Collate: + 'ACStdDev.annualized.R' + 'CalmarRatio.Normalized.R' + 'CDDopt.R' + 'CDrawdown.R' + 'chart.Autocorrelation.R' + 'EmaxDDGBM.R' + 'GLMSmoothIndex.R' + 'maxDDGBM.R' + 'na.skip.R' + 'Return.GLM.R' + 'table.ComparitiveReturn.GLM.R' + 'table.normDD.R' + 'table.UnsmoothReturn.R' + 'UnsmoothReturn.R' Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE 2013-08-17 21:20:42 UTC (rev 2809) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE 2013-08-17 21:40:48 UTC (rev 2810) @@ -1,11 +1,12 @@ -export(ACStdDev.annualized) -export(CDrawdown) -export(chart.Autocorrelation) -export(EMaxDDGBM) -export(GLMSmoothIndex) -export(QP.Norm) -export(SterlingRatio.Normalized) -export(table.ComparitiveReturn.GLM) -export(table.EMaxDDGBM) -export(table.NormDD) -export(table.UnsmoothReturn) +export(ACStdDev.annualized) +export(CalmarRatio.Normalized) +export(CDrawdown) +export(chart.Autocorrelation) +export(EMaxDDGBM) +export(GLMSmoothIndex) +export(QP.Norm) +export(SterlingRatio.Normalized) +export(table.ComparitiveReturn.GLM) +export(table.EMaxDDGBM) +export(table.NormDD) +export(table.UnsmoothReturn) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Normalized.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Normalized.R 2013-08-17 21:20:42 UTC (rev 2809) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Normalized.R 2013-08-17 21:40:48 UTC (rev 2810) @@ -1,3 +1,5 @@ +#' QP function for calculation of Sharpe Ratio +#' #' calculate a Normalized Calmar or Sterling reward/risk ratio #' #' Normalized Calmar and Sterling Ratios are yet another method of creating a @@ -45,14 +47,14 @@ #' Normalized.SterlingRatio(managers[,1:6]) #' #' @export -#' @rdname CalmarRatio -#' QP function fo calculation of Sharpe Ratio QP.Norm <- function (R, tau,scale = NA) { Sharpe= as.numeric(SharpeRatio.annualized(edhec)) return(.63519+(.5*log(tau))+log(Sharpe)) } +#' @export CalmarRatio.Normalized <- function (R, tau = 1,scale = NA) { # @author Brian G. Peterson @@ -89,7 +91,7 @@ } #' @export -#' @rdname CalmarRatio +#' @rdname CalmarRatio.normalized SterlingRatio.Normalized <- function (R, tau=1,scale=NA, excess=.1) { # @author Brian G.
Peterson Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd 2013-08-17 21:40:48 UTC (rev 2810) @@ -0,0 +1,77 @@ +\name{QP.Norm} +\alias{Normalized.CalmarRatio} +\alias{Normalized.SterlingRatio} +\alias{QP.Norm} +\alias{SterlingRatio.Normalized} +\title{QP function for calculation of Sharpe Ratio} +\usage{ + QP.Norm(R, tau, scale = NA) + + SterlingRatio.Normalized(R, tau = 1, scale = NA, + excess = 0.1) +} +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} + + \item{scale}{number of periods in a year (daily scale = + 252, monthly scale = 12, quarterly scale = 4)} + + \item{excess}{for Sterling Ratio, excess amount to add to + the max drawdown, traditionally and default .1 (10\%)} +} +\description{ + calculate a Normalized Calmar or Sterling reward/risk + ratio +} +\details{ + Normalized Calmar and Sterling Ratios are yet another + method of creating a risk-adjusted measure for ranking + investments similar to the \code{\link{SharpeRatio}}. + + Both the Normalized Calmar and the Sterling ratio are the + ratio of annualized return over the absolute value of the + maximum drawdown of an investment. The Sterling ratio + adds an excess risk measure to the maximum drawdown, + traditionally and defaulting to 10\%. + + It is also traditional to use a three year return series + for these calculations, although the functions included + here make no effort to determine the length of your + series. If you want to use a subset of your series, + you'll need to truncate or subset the input data to the + desired length. + + Many other measures have been proposed to do similar + reward to risk ranking. It is the opinion of this author + that newer measures such as Sortino's + \code{\link{UpsidePotentialRatio}} or Favre's modified + \code{\link{SharpeRatio}} are both \dQuote{better} + measures, and should be preferred to the Calmar or + Sterling Ratio. +} +\examples{ +data(managers) + Normalized.CalmarRatio(managers[,1,drop=FALSE]) + Normalized.CalmarRatio(managers[,1:6]) + Normalized.SterlingRatio(managers[,1,drop=FALSE]) + Normalized.SterlingRatio(managers[,1:6]) +} +\author{ + Brian G. Peterson +} +\references{ + Bacon, Carl. \emph{Practical Portfolio Performance Measurement and Attribution}. Wiley. 2004. + + Magdon-Ismail, M. and Amir Atiya. Maximum drawdown. Risk Magazine, 01 Oct 2004.
+} +\seealso{ + \code{\link{Return.annualized}}, \cr + \code{\link{maxDrawdown}}, \cr + \code{\link{SharpeRatio.modified}}, \cr + \code{\link{UpsidePotentialRatio}} +} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{ts} + Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EMaxDDGBM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EMaxDDGBM.Rd (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EMaxDDGBM.Rd 2013-08-17 21:40:48 UTC (rev 2810) @@ -0,0 +1,23 @@ +\name{EMaxDDGBM} +\alias{EMaxDDGBM} +\title{Expected Drawdown using Brownian Motion Assumptions} +\usage{ + EMaxDDGBM(R, digits = 4) +} +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} +} +\description{ + Works on the model specified by Maddon-Ismail +} +\author{ + R +} +\keyword{Assumptions} +\keyword{Brownian} +\keyword{Drawdown} +\keyword{Expected} +\keyword{Motion} +\keyword{Using} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd 2013-08-17 21:20:42 UTC (rev 2809) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd 2013-08-17 21:40:48 UTC (rev 2810) @@ -1,23 +1,25 @@ -\name{EMaxDDGBM} -\alias{EMaxDDGBM} -\title{Expected Drawdown using Brownian Motion Assumptions} -\usage{ - EMaxDDGBM(R, digits = 4) -} -\arguments{ - \item{R}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} -} -\description{ - Works on the model specified by Maddon-Ismail -} -\author{ - R -} -\keyword{Assumptions} -\keyword{Brownian} -\keyword{Drawdown} -\keyword{Expected} -\keyword{Motion} -\keyword{Using} - +\name{table.EMaxDDGBM} +\alias{table.EMaxDDGBM} +\title{Expected Drawdown using Brownian Motion Assumptions} +\usage{ + table.EMaxDDGBM(R, digits = 4) +} +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} +} +\description{ + Works on the model specified by Maddon-Ismail which + investigates the behavior of this statistic for a + Brownian motion with drift. +} +\author{ + Peter Carl, Brian Peterson, Shubhankit Mohan +} +\keyword{Assumptions} +\keyword{Brownian} +\keyword{Drawdown} +\keyword{Expected} +\keyword{Motion} +\keyword{Using} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd 2013-08-17 21:20:42 UTC (rev 2809) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd 2013-08-17 21:40:48 UTC (rev 2810) @@ -1,47 +1,47 @@ -\name{Return.GLM} -\alias{Return.GLM} -\title{GLM Return Model} -\usage{ - Return.GLM(edhec,4) -} -\arguments{ - \item{Ra}{: an xts, vector, matrix, data frame, - timeSeries or zoo object of asset returns} - - \item{q}{: order of autocorrelation coefficient lag - factors} -} -\description{ - True returns represent the flow of information that would - determine the equilibrium value of the fund's securities - in a frictionless market. However, true economic returns - are not observed. The returns to hedge funds and other - alternative investments are often highly serially - correlated.We propose an econometric model of return - smoothingand develop estimators for the smoothing - pro?le as well as a smoothing-adjusted Sharpe ratio. 
-} -\details{ - To quantify the impact of all of these possible sources - of serial correlation, denote by R(t) the true economic - return of a hedge fund in period 't'; and let R(t) - satisfy the following linear single-factor model: where: - \deqn{R(0,t) = \theta_{0}R(t) + \theta_{1}R(t-1) + - \theta_{2}R(t-2) .... + \theta_{k}R(t-k)} where - \eqn{\theta}'i is defined as the weighted lag of - autocorrelated lag and whose sum is 1. -} -\author{ - Brian Peterson,Peter Carl, Shubhankit Mohan -} -\references{ - Mila Getmansky, Andrew W. Lo, Igor Makarov,\emph{An - econometric model of serial correlation and and - illiquidity in hedge fund Returns},Journal of Financial - Economics 74 (2004). -} -\keyword{distribution} -\keyword{models} -\keyword{multivariate} -\keyword{ts} - +\name{Return.GLM} +\alias{Return.GLM} +\title{GLM Return Model} +\usage{ + Return.GLM(edhec,4) +} +\arguments{ + \item{Ra}{: an xts, vector, matrix, data frame, + timeSeries or zoo object of asset returns} + + \item{q}{: order of autocorrelation coefficient lag + factors} +} +\description{ + True returns represent the flow of information that would + determine the equilibrium value of the fund's securities + in a frictionless market. However, true economic returns + are not observed. The returns to hedge funds and other + alternative investments are often highly serially + correlated.We propose an econometric model of return + smoothingand develop estimators for the smoothing pro?le + as well as a smoothing-adjusted Sharpe ratio. +} +\details{ + To quantify the impact of all of these possible sources + of serial correlation, denote by R(t) the true economic + return of a hedge fund in period 't'; and let R(t) + satisfy the following linear single-factor model: where: + \deqn{R(0,t) = \theta_{0}R(t) + \theta_{1}R(t-1) + + \theta_{2}R(t-2) .... + \theta_{k}R(t-k)} where + \eqn{\theta}'i is defined as the weighted lag of + autocorrelated lag and whose sum is 1. +} +\author{ + Brian Peterson,Peter Carl, Shubhankit Mohan +} +\references{ + Mila Getmansky, Andrew W. Lo, Igor Makarov,\emph{An + econometric model of serial correlation and and + illiquidity in hedge fund Returns},Journal of Financial + Economics 74 (2004). +} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{ts} + From noreply at r-forge.r-project.org Sat Aug 17 23:44:47 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 17 Aug 2013 23:44:47 +0200 (CEST) Subject: [Returnanalytics-commits] r2811 - pkg/PortfolioAnalytics/vignettes Message-ID: <20130817214447.244B6185349@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-17 23:44:46 +0200 (Sat, 17 Aug 2013) New Revision: 2811 Modified: pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw pkg/PortfolioAnalytics/vignettes/ROI_vignette.pdf Log: Adding sections to the ROI vignette Modified: pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw =================================================================== --- pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw 2013-08-17 21:40:48 UTC (rev 2810) +++ pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw 2013-08-17 21:44:46 UTC (rev 2811) @@ -138,4 +138,132 @@ The \code{bt\_maxret} object is a list containing the optimal weights and objective measure at each rebalance period. 
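(Illustrative aside, not part of the committed vignette: one way to poke at that list. The element name \code{weights} below is an assumption about the structure returned by \code{optimize.portfolio.rebalancing}, not a documented field.)

<<eval=FALSE>>=
# sketch: inspect the result for the first rebalance period
length(bt_maxret)       # one element per rebalance date
names(bt_maxret[[1]])   # see what each element actually carries
bt_maxret[[1]]$weights  # assumed slot holding that period's optimal weights
@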
+\section{Minimizing Portfolio Variance} +The objective to minimize portfolio variance is a quadratic problem of the form: +\begin{equation*} + \begin{aligned} + & \underset{\boldsymbol{w}}{\text{minimize}} + & & \boldsymbol{w}' \boldsymbol{\Sigma} \boldsymbol{w} \\ + \end{aligned} +\end{equation*} + +Where $\boldsymbol{\Sigma}$ is the estimated covariance matrix of asset returns and $\boldsymbol{w}$ is the set of weights. Because this is a quadratic problem, it is well suited to be solved using a quadratic programming solver. For these types of problems, PortfolioAnalytics uses the ROI package with the quadprog plugin. + +\subsection{Global Minimum Variance Portfolio} +\subsubsection{Portfolio Object} +<<>>= +# Create portfolio object +portf_minvar <- portfolio.spec(assets=funds) + +# Add full investment constraint to the portfolio object +portf_minvar <- add.constraint(portfolio=portf_minvar, type="full_investment") + +# Add objective to minimize variance +portf_minvar <- add.objective(portfolio=portf_minvar, type="risk", name="var") +@ + +The only constraint specified is the full investment constraint, therefore the optimization problem is solving for the global minimum variance portfolio. + +\subsubsection{Optimization} +<<>>= +# Run the optimization +opt_gmv <- optimize.portfolio(R=returns, portfolio=portf_minvar, + optimize_method="ROI") +print(opt_gmv) +@ + +\subsubsection{Backtesting} +<<>>= +bt_gmv <- optimize.portfolio.rebalancing(R=returns, portfolio=portf_minvar, optimize_method="ROI", rebalance_on="quarters", training_period=36) +@ + + +\subsection{Constrained Minimum Variance Portfolio} + +\subsubsection{Portfolio Object} +Constraints can be added to the \code{portf\_minvar} portfolio object previously created. +<<>>= +# Add long only constraints +portf_minvar <- add.constraint(portfolio=portf_minvar, type="box", min=0, max=1) + +# Add group constraints +portf_minvar <- add.constraint(portfolio=portf_minvar, + type="group", + groups=c(1, 2, 1), + group_min=c(0, 0.25, 0.10), + group_max=c(0.45, 0.6, 0.5)) +@ + +\subsubsection{Optimization} +<<>>= +# Run the optimization +opt_minvar <- optimize.portfolio(R=returns, portfolio=portf_minvar, optimize_method="ROI") +print(opt_minvar) +@ + +\subsubsection{Backtesting} +<<>>= +bt_minvar <- optimize.portfolio.rebalancing(R=returns, portfolio=portf_minvar, optimize_method="ROI", rebalance_on="quarters", training_period=36) +@ + +\section{Maximizing Quadratic Utility} +The objective to maximize quadratic utility is a quadratic problem of the form: +\begin{equation*} + \begin{aligned} + & \underset{\boldsymbol{w}}{\text{maximize}} + & & \boldsymbol{w}' \boldsymbol{\mu} - \frac{\lambda}{2}\boldsymbol{w}' \boldsymbol{\Sigma} \boldsymbol{w} \\ + \end{aligned} +\end{equation*} + +Where $\mu$ is the estimated mean asset returns, $\lambda$ is the risk aversion parameter, $\boldsymbol{\Sigma}$ is the estimated covariance matrix of asset returns and $\boldsymbol{w}$ is the set of weights. Quadratic utility maximizes return while penalizing variance. The $\lambda$ risk aversion parameter controls how much portfolio variance is penalized. Because this is a quadratic problem, it is well suited to be solved using a quadratic programming solver. For these types of problems, PortfolioAnalytics uses the ROI package with the quadprog plugin. + +\subsection{Portfolio Object} +The portfolio object is specified, and constraints and objectives are created separately. 
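(A standalone sketch before building the specification: the quadratic utility program above can also be handed straight to \code{solve.QP} from the quadprog package. Here \code{mu}, \code{Sigma} and \code{lambda} are assumed to be given; since \code{solve.QP} minimizes, the utility is negated via \code{Dmat = lambda * Sigma} and \code{dvec = mu}.)

<<eval=FALSE>>=
library(quadprog)
# maximize w'mu - (lambda/2) w' Sigma w
# == minimize (1/2) w' (lambda * Sigma) w - mu'w
N <- length(mu)
Amat <- cbind(rep(1, N), diag(N))  # full investment (equality) plus long only
bvec <- c(1, rep(0, N))
sol <- solve.QP(Dmat = lambda * Sigma, dvec = mu,
                Amat = Amat, bvec = bvec, meq = 1)
round(sol$solution, 4)             # optimal weight vector
@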
The constraints and objectives are created separately as an alternative example and could also have been added directly to the portfolio object as in the previous sections. +<<>>= +# Create initial portfolio object +init_portf <- portfolio.spec(assets=funds) + +# Create full investment constraint +fi_constr <- weight_sum_constraint(type="full_investment") + +# Create long only constraint +lo_constr <- box_constraint(type="long_only", assets=init_portf$assets) + +# Combine the constraints in a list +qu_constr <- list(fi_constr, lo_constr) + +# Create return objective +ret_obj <- return_objective(name="mean") + +# Create variance objective specifying a risk_aversion parameter which controls +# how much the variance is penalized +var_obj <- portfolio_risk_objective(name="var", risk_aversion=0.25) + +# Combine the objectives into a list +qu_obj <- list(ret_obj, var_obj) +@ + +\subsection{Optimization} +Note how the constraints and objectives are passed to optimize.portfolio. +<<>>= +# Run the optimization +opt_qu <- optimize.portfolio(R=returns, portfolio=init_portf, + constraints=qu_constr, + objectives=qu_obj, + optimize_method="ROI") +@ + +\subsection{Backtesting} +<<>>= +bt_qu <- optimize.portfolio.rebalancing(R=returns, portfolio=init_portf, + constraints=qu_constr, + objectives=qu_obj, + optimize_method="ROI", + rebalance_on="quarters", + training_period=36) +@ + +\section{Minimizing Expected Tail Loss} +TODO + \end{document} \ No newline at end of file Modified: pkg/PortfolioAnalytics/vignettes/ROI_vignette.pdf =================================================================== (Binary files differ) From noreply at r-forge.r-project.org Sun Aug 18 00:18:52 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 18 Aug 2013 00:18:52 +0200 (CEST) Subject: [Returnanalytics-commits] r2812 - pkg/PortfolioAnalytics/vignettes Message-ID: <20130817221852.8830B1858CF@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-18 00:18:51 +0200 (Sun, 18 Aug 2013) New Revision: 2812 Modified: pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw pkg/PortfolioAnalytics/vignettes/portfolio_vignette.pdf Log: revising portfolio_vignette per feedback from Doug Modified: pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw =================================================================== --- pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw 2013-08-17 21:44:46 UTC (rev 2811) +++ pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw 2013-08-17 22:18:51 UTC (rev 2812) @@ -33,16 +33,17 @@ # Use the first 4 columns in edhec for a returns object returns <- edhec[, 1:4] +colnames(returns) <- c("CA", "CTAG", "DS", "EM") print(head(returns, 5)) # Get a character vector of the fund names fund.names <- colnames(returns) @ -\section{Creating the "portfolio" object} -The portfolio object is instantiated with the \code{portfolio.spec} function. The main argument to \code{portfolio.spec} is assets, this is a required argument. The assets argument can be a scalar value for the number of assets, a character vector of fund names, or a named vector of seed weights. If seed weights are not specified, an equal weight portfolio will be assumed. +\section{Creating the Portfolio Object} +The portfolio object is instantiated with the \code{portfolio.spec} function. The main argument to \code{portfolio.spec} is assets, this is a required argument. 
The assets argument can be a scalar value for the number of assets, a character vector of fund names, or a named vector of initial weights. If initial weights are not specified, an equal weight portfolio will be assumed. -The \code{pspec} object is an S3 object of class "portfolio". When first created, the portfolio object has an element named \code{assets} with the seed weights, an element named \code{category\_labels}, an element named \code{weight\_seq} with a seed sequence of weights if specified, an empty constraints list and an empty objectives list. +The \code{pspec} object is an S3 object of class "portfolio". When first created, the portfolio object has an element named \code{assets} with the initial weights, an element named \code{category\_labels}, an element named \code{weight\_seq} with sequence of weights if specified, an empty constraints list and an empty objectives list. <<>>= # Specify a portfolio object by passing a character vector for the @@ -115,7 +116,7 @@ group_labels=c("GroupA", "GroupB")) @ -A position limit constraint can be added to limit the number of assets with non-zero, long, or short positions. The ROI solver used for maximizing return and ETL/ES/cVaR objectives support position limit constraints for \code{max\_pos} (i.e. using the glpk plugin). \code{max\_pos} is not supported for the ROI solver using the quadprog plugin. Note that \code{max\_pos\_long} and \code{max\_pos\_short} are not supported for either ROI solver. Position limit constraints are fully supported for DEoptim and random solvers. +A position limit constraint can be added to limit the number of assets with non-zero, long, or short positions. The ROI solver interfaces to the Rglpk package (i.e. using the glpk plugin) for solving maximizing return and ETL/ES/cVaR objectives. The Rglpk package supports integer programming and thus supports position limit constraints for the \code{max\_pos} argument. The quadprog package does not support integer programming, and therefore \code{max\_pos} is not supported for the ROI solver using the quadprog plugin. Note that \code{max\_pos\_long} and \code{max\_pos\_short} are not supported for either ROI solver. All position limit constraints are fully supported for DEoptim and random solvers. <<>>= # Add position limit constraint such that we have a maximum number of three assets with non-zero weights. @@ -125,12 +126,13 @@ # pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos_long=3, max_pos_short=3) @ -A target diversification can be specified as a constraint. Diversification is defined as $diversification = \sum_{i=1}^N w_i$ for $N$ assets. The optimizers work by applying a penalty if the diversification value is more than 5\% away from \code{div\_target}. +A target diversification can be specified as a constraint. Diversification is defined as $diversification = \sum_{i=1}^N w_i^2$ for $N$ assets. The diversification constraint is implemented for the global optimizers by applying a penalty if the diversification value is more than 5\% away from \code{div\_target}. +TODO add support for diversification as a constraint for ROI solvers. Can't do this with Rglpk, but can add as a penalty term for quadratic utility and minimum variance problems <<>>= pspec <- add.constraint(portfolio=pspec, type="diversification", div_target=0.7) @ -A target turnover can be specified as a constraint. The turnover is calculated from a set of initial weights. The initial weights can be specified, by default they are the seed weights in the portfolio object. 
The optimizers work by applying a penalty if the turnover value is more than 5\% away from \code{turnover\_target}. Note that the turnover constraint is not currently supported for the ROI solvers. +A target turnover can be specified as a constraint. The turnover is calculated from a set of initial weights. The initial weights can be specified, by default they are the initial weights in the portfolio object. The turnover constraint is implemented for the global optimizers by applying a penalty if the turnover value is more than 5\% away from \code{turnover\_target}. Note that the turnover constraint is not currently supported for the ROI solvers. <<>>= pspec <- add.constraint(portfolio=pspec, type="turnover", turnover_target=0.2) @ @@ -142,7 +144,7 @@ This demonstrates adding constraints to the portfolio object. As an alternative to adding constraints directly to the portfolio object, constraints can be specified as separate objects. -\subsection{specifying Constraints as Separate Objects} +\subsection{Specifying Constraints as Separate Objects} The following examples will demonstrate how to specify constraints as separate objects for all constraints types. <<>>= @@ -182,30 +184,10 @@ enabled=TRUE) @ -The portfolio object now has 1 object in the objectives list for the risk objective we just added. -<<>>= -print(pspec$objectives) -@ +TODO Add more objectives -We now have a portfolio object with the following constraints and objectives to pass to \code{optimize.portfolio}. -\begin{itemize} - \item Constraints - \begin{itemize} - \item weight\_sum: The weights sum to 1 (i.e. full investment constraint) - \item box: minimum weight of any asset must be greater than or equal to 0.05 and the maximum weight of any asset must be less than or equal to 0.4. -\end{itemize} - \item Objectives - \begin{itemize} - \item risk objective: minimize portfolio var(iance). -\end{itemize} - -\end{itemize} - \section{Optimization} -Note that this currently does not work, but is how I envision the portfolio object replacing the current constraint object. 
-<<>>= -#out <- optimize.portfolio(R=returns, portfolio=pspec, optimize_method="ROI") -@ +TODO \end{document} \ No newline at end of file Modified: pkg/PortfolioAnalytics/vignettes/portfolio_vignette.pdf =================================================================== (Binary files differ) From noreply at r-forge.r-project.org Sun Aug 18 07:18:37 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 18 Aug 2013 07:18:37 +0200 (CEST) Subject: [Returnanalytics-commits] r2813 - pkg/PortfolioAnalytics/R Message-ID: <20130818051837.F310B185022@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-18 07:18:36 +0200 (Sun, 18 Aug 2013) New Revision: 2813 Modified: pkg/PortfolioAnalytics/R/generics.R Log: modifying print method for portfolio objects to add description for box constraints and print asset names per Doug's comments Modified: pkg/PortfolioAnalytics/R/generics.R =================================================================== --- pkg/PortfolioAnalytics/R/generics.R 2013-08-17 22:18:51 UTC (rev 2812) +++ pkg/PortfolioAnalytics/R/generics.R 2013-08-18 05:18:36 UTC (rev 2813) @@ -60,7 +60,12 @@ # Assets cat("\nAssets\n") nassets <- length(portfolio$assets) - cat("Number of assets:", nassets, "\n") + cat("Number of assets:", nassets, "\n\n") + cat("Asset Names\n") + print(head(names(portfolio$assets), 10)) + if(nassets > 10){ + cat("More than 10 assets, only printing the first 10\n") + } # Constraints cat("\nConstraints\n") @@ -79,15 +84,55 @@ cat("Number of enabled constraints:", n.enabled.constraints, "\n") if(length(enabled.constraints) > 0){ cat("Enabled constraint types\n") - for(type in names.constraints[enabled.constraints]) { - cat("\t\t-", type, "\n") + constraints <- portfolio$constraints + nconstraints <- length(constraints) + for(i in 1:nconstraints){ + if(constraints[[i]]$enabled){ + type <- constraints[[i]]$type + if(type == "box"){ + # long only + if(all(constraints[[i]]$min == 0) & all(constraints[[i]]$max == 1)){ + cat("\t\t-", "box (long only)", "\n") + } else if(all(constraints[[i]]$min == -Inf) & all(constraints[[i]]$max == Inf)){ + # unconstrained + cat("\t\t-", "box (unconstrained)", "\n") + } else if(any(constraints[[i]]$min < 0)){ + # with shorting + cat("\t\t-", "box (with shorting)", "\n") + } else { + cat("\t\t-", type, "\n") + } + } else { + cat("\t\t-", type, "\n") + } + } } } cat("Number of disabled constraints:", nconstraints - n.enabled.constraints, "\n") if((nconstraints - n.enabled.constraints) > 0){ cat("Disabled constraint types\n") - for(type in setdiff(names.constraints, names.constraints[enabled.constraints])) { - cat("\t\t-", type, "\n") + constraints <- portfolio$constraints + nconstraints <- length(constraints) + for(i in 1:nconstraints){ + if(!constraints[[i]]$enabled){ + type <- constraints[[i]]$type + if(type == "box"){ + # long only + if(all(constraints[[i]]$min == 0) & all(constraints[[i]]$max == 1)){ + cat("\t\t-", "box (long only)", "\n") + } else if(all(constraints[[i]]$min == -Inf) & all(constraints[[i]]$max == Inf)){ + # unconstrained + cat("\t\t-", "box (unconstrained)", "\n") + } else if(any(constraints[[i]]$min < 0)){ + # with shorting + cat("\t\t-", "box (with shorting)", "\n") + } else { + cat("\t\t-", type, "\n") + } + } else { + cat("\t\t-", type, "\n") + } + } } } From noreply at r-forge.r-project.org Sun Aug 18 07:36:32 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 18 Aug 2013 07:36:32 +0200 (CEST) Subject: [Returnanalytics-commits] r2814 - 
pkg/PortfolioAnalytics/sandbox Message-ID: <20130818053632.1EF20184937@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-18 07:36:30 +0200 (Sun, 18 Aug 2013) New Revision: 2814 Added: pkg/PortfolioAnalytics/sandbox/testing_DE_opt_script.R Log: adding testing script for DEoptim with more advanced constraints and objectives Added: pkg/PortfolioAnalytics/sandbox/testing_DE_opt_script.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_DE_opt_script.R (rev 0) +++ pkg/PortfolioAnalytics/sandbox/testing_DE_opt_script.R 2013-08-18 05:36:30 UTC (rev 2814) @@ -0,0 +1,181 @@ + +# The following optimization problems will be run +# mean-mETL +# - maximize mean-to-ETL (i.e. reward-to-risk) +# minStdDev +# - minimize volatility +# eqStdDev +# - equal risk (volatility) + +# Include optimizer and multi-core packages +library(PortfolioAnalytics) +require(quantmod) +require(DEoptim) +require(foreach) + +# The multicore package, and therefore registerDoMC, should not be used in a +# GUI environment, because multiple processes then share the same GUI. Only use +# when running from the command line +require(doMC) +registerDoMC(3) + +data(edhec) + +# Drop some indexes and reorder +edhec.R = edhec[,c("Convertible Arbitrage", "Equity Market Neutral","Fixed Income Arbitrage", "Event Driven", "CTA Global", "Global Macro", "Long/Short Equity")] + +# Annualized standard deviation +pasd <- function(R, weights){ + as.numeric(StdDev(R=R, weights=weights)*sqrt(12)) # hardcoded for monthly data + # as.numeric(StdDev(R=R, weights=weights)*sqrt(4)) # hardcoded for quarterly data +} + +# Set some parameters +rebalance_period = 'quarters' # uses endpoints identifiers from xts +clean = "none" #"boudt" +permutations = 4000 + +# Create initial portfolio object used to initialize ALL the bouy portfolios +init.portf <- portfolio.spec(assets=colnames(edhec.R), + weight_seq=generatesequence(by=0.005)) +# Add leverage constraint +init.portf <- add.constraint(portfolio=init.portf, + type="leverage", + min_sum=0.99, + max_sum=1.01) +# Add box constraint +init.portf <- add.constraint(portfolio=init.portf, + type="box", + min=0.05, + max=0.3) + +#Add measure 1, mean return +init.portf <- add.objective(portfolio=init.portf, + type="return", # the kind of objective this is + name="mean", # name of the function + enabled=TRUE, # enable or disable the objective + multiplier=0 # calculate it but don't use it in the objective +) + +# Add measure 2, annualized standard deviation +init.portf <- add.objective(portfolio=init.portf, + type="risk", # the kind of objective this is + name="pasd", # to minimize from the sample + enabled=TRUE, # enable or disable the objective + multiplier=0 # calculate it but don't use it in the objective +) + +# Add measure 3, ES with p=(1-1/12) +# set confidence for ES +p=1-1/12 # for monthly + +init.portf <- add.objective(portfolio=init.portf, + type="risk", # the kind of objective this is + name="ES", # the function to minimize + enabled=FALSE, # enable or disable the objective + multiplier=0, # calculate it but don't use it in the objective + arguments=list(p=p) +) + +# Set up portfolio for Mean-mETL +MeanmETL.portf <- init.portf +MeanmETL.portf$objectives[[1]]$multiplier=-1 # mean +MeanmETL.portf$objectives[[3]]$enabled=TRUE # mETL +MeanmETL.portf$objectives[[3]]$multiplier=1 # mETL + +# Set up portfolio for min pasd +MinSD.portf <- init.portf +MinSD.portf$objectives[[2]]$multiplier=1 + +# Set up portfolio for eqStdDev +EqSD.portf <- 
add.objective(portfolio=init.portf, + type="risk_budget", + name="StdDev", + min_concentration=TRUE, + arguments = list(p=(1-1/12))) +# Without a sub-objective, we get a somewhat undefined result, since there are (potentially) many Equal SD contribution portfolios. +EqSD.portf$objectives[[2]]$multiplier=1 # min pasd + +# Set up portfolio to maximize mean with mETL risk limit +MeanRL.portf <- add.objective(portfolio=init.portf, + type='risk_budget', + name="ES", + min_prisk=-Inf, + max_prisk=0.4, + arguments=list(method="modified", p=p)) +MeanRL.portf$objectives[[1]]$multiplier=-1 # mean +# Change box constraints max to vector of 1s +MeanRL.portf$constraints[[2]]$max=rep(1, 7) + +# Set the 'R' variable +R <- edhec.R + +start_time<-Sys.time() +print(paste('Starting optimization at',Sys.time())) + +##### mean-mETL ##### +MeanmETL.DE <- optimize.portfolio(R=R, + portfolio=MeanmETL.portf, + optimize_method="DEoptim", + trace=TRUE, + search_size=2000, + traceDE=5) +print(MeanmETL.DE) +print(MeanmETL.DE$elapsed_time) +save(MeanmETL.DE, file=paste('MeanmETL',Sys.Date(),'rda',sep='.')) + +# Evaluate the objectives with DE through time +# MeanmETL.DE.t <- optimize.portfolio.rebalancing(R=R, +# portfolio=MeanSD.portf, +# optimize_method="random", +# trace=TRUE, +# search_size=2000, +# rebalance_on=rebalance_period, +# training_period=36) +# MeanmETL.w = extractWeights.rebal(MeanmETL.DE.t) +# MeanmETL=Return.rebalancing(edhec.R, MeanmETL) +# colnames(MeanmETL) = "MeanmETL" +# save(MeanmETL.DE, MeanmETL.DE.t, MeanmETL.w, MeanmETL, file=paste('MeanmETL',Sys.Date(),'rda',sep='.')) + +print(paste('Completed MeanmETL optimization at',Sys.time(),'moving on to MinSD')) + + +##### min pasd ##### +MinSD.DE <- optimize.portfolio(R=R, + portfolio=MinSD.portf, + optimize_method="DEoptim", + trace=TRUE, + search_size=2000, + traceDE=5) +print(MinSD.DE) +print(MinSD.DE$elapsed_time) +save(MinSD.DE, file=paste('MinSD',Sys.Date(),'rda',sep='.')) + +print(paste('Completed MinSD optimization at',Sys.time(),'moving on to EqSD')) + +##### EqSD ##### +EqSD.DE <- optimize.portfolio(R=R, + portfolio=EqSD.portf, + optimize_method="DEoptim", + trace=TRUE, + search_size=2000, + traceDE=5) +print(EqSD.DE) +print(EqSD.DE$elapsed_time) +save(EqSD.DE, file=paste('EqSD',Sys.Date(),'rda',sep='.')) + +print(paste('Completed EqSD optimization at',Sys.time(),'moving on to MeanRL')) + +MeanRL.DE <- optimize.portfolio(R=R, + portfolio=MeanRL.portf, + optimize_method="DEoptim", + trace=TRUE, + search_size=2000, + traceDE=5) +print(MeanRL.DE) +print(MeanRL.DE$elapsed_time) +save(MeanRL.DE, file=paste('MeanRL',Sys.Date(),'rda',sep='.')) + +end_time<-Sys.time() +print("Optimization Complete") +print(end_time-start_time) From noreply at r-forge.r-project.org Sun Aug 18 11:14:31 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 18 Aug 2013 11:14:31 +0200 (CEST) Subject: [Returnanalytics-commits] r2815 - in pkg/PerformanceAnalytics/sandbox/pulkit: . 
R man Message-ID: <20130818091431.31086185181@r-forge.r-project.org> Author: pulkit Date: 2013-08-18 11:14:30 +0200 (Sun, 18 Aug 2013) New Revision: 2815 Removed: pkg/PerformanceAnalytics/sandbox/pulkit/R/edd.R Modified: pkg/PerformanceAnalytics/sandbox/pulkit/ pkg/PerformanceAnalytics/sandbox/pulkit/.Rbuildignore pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd Log: deleted edd.R and conflict resolved Property changes on: pkg/PerformanceAnalytics/sandbox/pulkit ___________________________________________________________________ Added: svn:ignore + .Rproj.user .Rhistory .RData Modified: pkg/PerformanceAnalytics/sandbox/pulkit/.Rbuildignore =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/.Rbuildignore 2013-08-18 05:36:30 UTC (rev 2814) +++ pkg/PerformanceAnalytics/sandbox/pulkit/.Rbuildignore 2013-08-18 09:14:30 UTC (rev 2815) @@ -3,3 +3,5 @@ ChangeLog\.1\.0\.0 week* Week* +^.*\.Rproj$ +^\.Rproj\.user$ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-18 05:36:30 UTC (rev 2814) +++ pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-18 09:14:30 UTC (rev 2815) @@ -1,10 +1,10 @@ export(AlphaDrawdown) -export(BenchmarkSR) -export(chart.BenchmarkSR) -export(chart.SRIndifference) -export(EconomicDrawdown) -export(EDDCOPS) -export(MinTrackRecord) -export(REDDCOPS) -export(rollDrawdown) -export(rollEconomicMax) +#export(BenchmarkSR) +#export(chart.BenchmarkSR) +#export(chart.SRIndifference) +#export(EconomicDrawdown) +#export(EDDCOPS) +#export(MinTrackRecord) +#export(REDDCOPS) +#export(rollDrawdown) +#export(rollEconomicMax) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-18 05:36:30 UTC (rev 2814) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-18 09:14:30 UTC (rev 2815) @@ -71,6 +71,7 @@ } } + vs = vs[1] corr = table.Correlation(edhec,edhec) corr_avg = 0 for(i in 1:(columns-1)){ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R 2013-08-18 05:36:30 UTC (rev 2814) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R 2013-08-18 09:14:30 UTC (rev 2815) @@ -29,9 +29,7 @@ #'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model #'(CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University #'of Florida,September 2012. 
-#' #'@examples -#' #'AlphaDrawdown(edhec[,1],edhec[,2]) ## expected value : 0.5141929 #' #'AlphaDrawdown(edhec[,1],edhec[,2],type="max") ## expected value : 0.8983177 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R 2013-08-18 05:36:30 UTC (rev 2814) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R 2013-08-18 09:14:30 UTC (rev 2815) @@ -31,9 +31,8 @@ #'@seealso \code{\link{plot}} #'@keywords ts multivariate distribution models hplot #'@examples -#'ls() +#'chart.Penance(edhec,0.95) #' -#' #'@references Bailey, David H. and Lopez de Prado, Marcos,Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). chart.Penance<-function(R,confidence,type=c("ar","normal"),reference.grid = TRUE,main=NULL,ylab = NULL,xlab = NULL,element.color="darkgrey",lwd = 2,pch = 1,cex = 1,cex.axis=0.8,cex.lab = 1,cex.main = 1,xlim = NULL,ylim = NULL,...){ Deleted: pkg/PerformanceAnalytics/sandbox/pulkit/R/edd.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/edd.R 2013-08-18 05:36:30 UTC (rev 2814) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/edd.R 2013-08-18 09:14:30 UTC (rev 2815) @@ -1,84 +0,0 @@ -#'@title Calculate the Economic Drawdown -#' -#'@description -#'\code{EconomicDrawdown} calculates the Economic Drawdown(EDD) for -#'a return series.To calculate the economic drawdown cumulative -#'return and economic max is calculated for each point. The risk -#'free return(rf) is taken as the input. -#' -#'Economic Drawdown is given by the equation -#' -#'\deqn{EDD(t)=1-\frac{W_t}/{EM(t)}} -#' -#'Here EM stands for Economic Max and is the code \code{\link{EconomicMax}} -#' -#' -#'@param R an xts, vector, matrix, data frame, timeseries, or zoo object of asset return. -#'@param Rf risk free rate can be vector such as government security rate of return -#'@param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining(FALSE) -#'to aggregate returns, default is TRUE -#'@param \dots any other variable -#'@references Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to -#'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) -#'@examples -#'EconomicDrawdown(edhec,0.08,100) -#' -#' @export -EconomicDrawdown<-function(R,Rf,h, geometric = TRUE,...) -{ - - # DESCRIPTION: - # calculates the Economic Drawdown(EDD) for - # a return series.To calculate the economic drawdown cumulative - # return and rolling economic max is calculated for each point. The risk - # free return(rf) is taken as the input. 
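# [Sketch added for illustration; not part of the deleted file.] The economic
# drawdown described above, for a single return series x and a constant
# per-period risk-free rate rf: EM(t) = max over s <= t of W_s*(1+rf)^(t-s),
# and EDD(t) = 1 - W_t / EM(t).
edd_sketch <- function(x, rf) {
  W  <- cumprod(1 + x)                            # cumulative wealth index
  tt <- seq_along(W)
  EM <- (1 + rf)^tt * cummax(W * (1 + rf)^(-tt))  # running economic max
  1 - W / EM                                      # economic drawdown path
}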
- - # FUNCTION: - x = checkData(R) - columns = ncol(x) - n = nrow(x) - columnnames = colnames(x) - rf = checkData(Rf) - nr = length(Rf) - if(nr != 1 && nr != n ){ - stop("The number of rows of the returns and the risk free rate do not match") - } - - EDD<-function(xh,geometric){ - if(geometric) - Return.cumulative = cumprod(1+xh) - else Return.cumulative = 1 + cumsum(xh) - l = length(Return.cumulative) - if(nr == 1){ - EM = max(Return.cumulative*(1+rf)^(l-c(1:l))) - } - else{ - rf = rf[index(xh)] - prodRf = cumprod(1+rf) - EM = max(Return.cumulative*as.numeric(last(prodRf)/prodRf)) - } - result = 1 - last(Return.cumulative)/EM - } - - for(column in 1:columns){ - column.drawdown <- na.skip(x[,column], FUN = EDD, geometric = geometric) - if(column == 1) - Economicdrawdown = column.drawdown - else Economicdrawdown = merge(Economicdrawdown, column.drawdown) - } - colnames(Economicdrawdown) = columnnames - Economicdrawdown = reclass(Economicdrawdown, x) - return(Economicdrawdown) -} - -############################################################################### -# R (http://r-project.org/) Econometrics for Performance and Risk Analysis -# -# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson -# -# This R package is distributed under the terms of the GNU Public License (GPL) -# for full details see the file COPYING -# -# $Id: edd.R $ -# -############################################################################## Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd 2013-08-18 05:36:30 UTC (rev 2814) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd 2013-08-18 09:14:30 UTC (rev 2815) @@ -76,7 +76,7 @@ water. } \examples{ -ls() +chart.Penance(edhec,0.95) } \references{ Bailey, David H. and Lopez de Prado, From noreply at r-forge.r-project.org Sun Aug 18 14:19:32 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 18 Aug 2013 14:19:32 +0200 (CEST) Subject: [Returnanalytics-commits] r2816 - in pkg/PortfolioAnalytics: . 
R man Message-ID: <20130818121932.754A6184200@r-forge.r-project.org> Author: braverock Date: 2013-08-18 14:19:32 +0200 (Sun, 18 Aug 2013) New Revision: 2816 Added: pkg/PortfolioAnalytics/man/extractObjectiveMeasures.Rd Removed: pkg/PortfolioAnalytics/man/add.objective_v1.Rd pkg/PortfolioAnalytics/man/add.objective_v2.Rd pkg/PortfolioAnalytics/man/constrained_objective_v1.Rd pkg/PortfolioAnalytics/man/constrained_objective_v2.Rd pkg/PortfolioAnalytics/man/constraint_fnMap.Rd pkg/PortfolioAnalytics/man/constraint_fn_map.Rd pkg/PortfolioAnalytics/man/constraint_v1.Rd pkg/PortfolioAnalytics/man/constraint_v2.Rd pkg/PortfolioAnalytics/man/extractWeights.rebal.Rd pkg/PortfolioAnalytics/man/get.constraints.Rd pkg/PortfolioAnalytics/man/optimize.portfolio.rebalancing_v1.Rd pkg/PortfolioAnalytics/man/optimize.portfolio_v1.Rd pkg/PortfolioAnalytics/man/optimize.portfolio_v2.Rd pkg/PortfolioAnalytics/man/random_portfolios_v2.Rd pkg/PortfolioAnalytics/man/randomize_portfolio_v2.Rd pkg/PortfolioAnalytics/man/set.portfolio.moments_v2.Rd pkg/PortfolioAnalytics/man/volatility_constraint.Rd Modified: pkg/PortfolioAnalytics/DESCRIPTION pkg/PortfolioAnalytics/R/constrained_objective.R pkg/PortfolioAnalytics/R/constraints.R pkg/PortfolioAnalytics/R/objective.R pkg/PortfolioAnalytics/R/objectiveFUN.R pkg/PortfolioAnalytics/R/optimize.portfolio.R pkg/PortfolioAnalytics/R/portfolio.R pkg/PortfolioAnalytics/man/add.constraint.Rd pkg/PortfolioAnalytics/man/add.objective.Rd pkg/PortfolioAnalytics/man/constrained_objective.Rd pkg/PortfolioAnalytics/man/constraint.Rd pkg/PortfolioAnalytics/man/is.objective.Rd pkg/PortfolioAnalytics/man/minmax_objective.Rd pkg/PortfolioAnalytics/man/objective.Rd pkg/PortfolioAnalytics/man/optimize.portfolio.Rd pkg/PortfolioAnalytics/man/optimize.portfolio.rebalancing.Rd pkg/PortfolioAnalytics/man/portfolio.spec.Rd Log: - update roxygen comments to remove a lot of duplication in the docs - add cross-references - standardize on non-versioned function names Modified: pkg/PortfolioAnalytics/DESCRIPTION =================================================================== --- pkg/PortfolioAnalytics/DESCRIPTION 2013-08-18 09:14:30 UTC (rev 2815) +++ pkg/PortfolioAnalytics/DESCRIPTION 2013-08-18 12:19:32 UTC (rev 2816) @@ -2,7 +2,7 @@ Type: Package Title: Portfolio Analysis, including Numeric Methods for Optimization of Portfolios -Version: 0.8.2 +Version: 0.8.3 Date: $Date$ Author: Kris Boudt, Peter Carl, Brian G. Peterson Contributors: Hezky Varon, Guy Yollin Modified: pkg/PortfolioAnalytics/R/constrained_objective.R =================================================================== --- pkg/PortfolioAnalytics/R/constrained_objective.R 2013-08-18 09:14:30 UTC (rev 2815) +++ pkg/PortfolioAnalytics/R/constrained_objective.R 2013-08-18 12:19:32 UTC (rev 2816) @@ -15,52 +15,8 @@ # TODO add more details about the nuances of the optimization engines -#' function to calculate a numeric return value for a portfolio based on a set of constraints -#' -#' function to calculate a numeric return value for a portfolio based on a set of constraints, -#' we'll try to make as few assumptions as possible, and only run objectives that are required by the user -#' -#' If the user has passed in either min_sum or max_sum constraints for the portfolio, or both, -#' and are using a numerical optimization method like DEoptim, and normalize=TRUE, the default, -#' we'll normalize the weights passed in to whichever boundary condition has been violated. 
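# Illustrative sketch of the normalization just described (not part of the
# package source): scale the weight vector back to whichever leverage bound,
# min_sum or max_sum, has been violated; weights already inside the bounds
# are returned unchanged.
normalize_to_bounds <- function(w, min_sum = 0.99, max_sum = 1.01) {
  if (sum(w) < min_sum) w <- w * (min_sum / sum(w))  # scale up to min_sum
  if (sum(w) > max_sum) w <- w * (max_sum / sum(w))  # scale down to max_sum
  w
}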
-#' If using random portfolios, all the portfolios generated will meet the constraints by construction. -#' NOTE: this means that the weights produced by a numeric optimization algorithm like DEoptim -#' might violate your constraints, so you'd need to renormalize them after optimizing -#' We apply the same normalization in \code{\link{optimize.portfolio}} so that the weights you see have been -#' normalized to min_sum if the generated portfolio is smaller than min_sum or max_sum if the -#' generated portfolio is larger than max_sum. -#' This normalization increases the speed of optimization and convergence by several orders of magnitude in many cases. -#' -#' You may find that for some portfolios, normalization is not desirable, if the algorithm -#' cannot find a direction in which to move to head towards an optimal portfolio. In these cases, -#' it may be best to set normalize=FALSE, and penalize the portfolios if the sum of the weighting -#' vector lies outside the min_sum and/or max_sum. -#' -#' Whether or not we normalize the weights using min_sum and max_sum, and are using a numerical optimization -#' engine like DEoptim, we will penalize portfolios that violate weight constraints in much the same way -#' we penalize other constraints. If a min_sum/max_sum normalization has not occurred, convergence -#' can take a very long time. We currently do not allow for a non-normalized full investment constraint. -#' Future version of this function could include this additional constraint penalty. -#' -#' When you are optimizing a return objective, you must specify a negative multiplier -#' for the return objective so that the function will maximize return. If you specify a target return, -#' any return less than your target will be penalized. If you do not specify a target return, -#' you may need to specify a negative VTR (value to reach) , or the function will not converge. -#' Try the maximum expected return times the multiplier (e.g. -1 or -10). -#' Adding a return objective defaults the multiplier to -1. -#' -#' Additional parameters for random portfolios or \code{\link[DEoptim]{DEoptim.control}} may be passed in via \dots -#' -#' -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns -#' @param w a vector of weights to test -#' @param constraints an object of type "constraints" specifying the constraints for the optimization, see \code{\link{constraint}} -#' @param \dots any other passthru parameters -#' @param trace TRUE/FALSE whether to include debugging and additional detail in the output list -#' @param normalize TRUE/FALSE whether to normalize results to min/max sum (TRUE), or let the optimizer penalize portfolios that do not conform (FALSE) -#' @param storage TRUE/FALSE default TRUE for DEoptim with trace, otherwise FALSE. not typically user-called -#' @seealso \code{\link{constraint}}, \code{\link{objective}}, \code{\link[DEoptim]{DEoptim.control}} -#' @author Kris Boudt, Peter Carl, Brian G. 
Peterson +#' @rdname constrained_objective +#' @name constrained_objective #' @export constrained_objective_v1 <- function(w, R, constraints, ..., trace=FALSE, normalize=TRUE, storage=FALSE) { @@ -336,7 +292,7 @@ } } -#' constrained_objective_v2 function to calculate a numeric return value for a portfolio based on a set of constraints and objectives +#' calculate a numeric return value for a portfolio based on a set of constraints and objectives #' #' function to calculate a numeric return value for a portfolio based on a set of constraints, #' we'll try to make as few assumptions as possible, and only run objectives that are required by the user @@ -370,7 +326,10 @@ #' Try the maximum expected return times the multiplier (e.g. -1 or -10). #' Adding a return objective defaults the multiplier to -1. #' -#' Additional parameters for random portfolios or \code{\link[DEoptim]{DEoptim.control}} may be passed in via \dots +#' Additional parameters for other solvers +#' (e.g. random portfolios or +#' \code{\link[DEoptim]{DEoptim.control}} or pso or GenSA +#' may be passed in via \dots #' #' #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns @@ -744,5 +703,7 @@ } # Alias constrained_objective_v2 to constrained_objective +#' @rdname constrained_objective +#' @name constrained_objective #' @export constrained_objective <- constrained_objective_v2 \ No newline at end of file Modified: pkg/PortfolioAnalytics/R/constraints.R =================================================================== --- pkg/PortfolioAnalytics/R/constraints.R 2013-08-18 09:14:30 UTC (rev 2815) +++ pkg/PortfolioAnalytics/R/constraints.R 2013-08-18 12:19:32 UTC (rev 2816) @@ -12,6 +12,11 @@ #' constructor for class constraint #' +#' This function is the constructior for the constraint object stored in the +#' \code{\link{portfolio.spec}} object. +#' +#' See main documentation in \code{\link{add.constraint}} +#' #' @param assets number of assets, or optionally a named vector of assets specifying seed weights #' @param ... any other passthru parameters #' @param min numeric or named vector specifying minimum weight box constraints @@ -21,7 +26,13 @@ #' @param min_sum minimum sum of all asset weights, default .99 #' @param max_sum maximum sum of all asset weights, default 1.01 #' @param weight_seq seed sequence of weights, see \code{\link{generatesequence}} -#' @author Peter Carl and Brian G. Peterson +#' @param type character type of the constraint to add or update +#' @param assets number of assets, or optionally a named vector of assets specifying seed weights +#' @param ... any other passthru parameters +#' @param constrclass character to name the constraint class +#' @author Peter Carl, Brian G. Peterson, Ross Bennett +#' @aliases contraint_v1, constraint_v2 +#' @seealso \code{\link{add.constraint}} #' @examples #' exconstr <- constraint(assets=10, min_sum=1, max_sum=1, min=.01, max=.35, weight_seq=generatesequence()) #' @export @@ -152,15 +163,7 @@ } -#' constructor for class v2_constraint -#' -#' This function is called by the constructor for the specific constraint. -#' -#' @param type character type of the constraint to add or update -#' @param assets number of assets, or optionally a named vector of assets specifying seed weights -#' @param ... 
any other passthru parameters -#' @param constrclass character to name the constraint class -#' @author Ross Bennett +#' @rdname constraint #' @export constraint_v2 <- function(type, enabled=TRUE, ..., constrclass="v2_constraint"){ if(!hasArg(type)) stop("you must specify a constraint type") @@ -181,21 +184,21 @@ #' General interface for adding and/or updating optimization constraints. #' -#' This is the main function for adding and/or updating constraints to the \code{{portfolio}} object. +#' This is the main function for adding and/or updating constraints to the \code{\link{portfolio.spec}} object. #' -#' The following constraint types are supported: +#' The following constraint types may be specified: #' \itemize{ -#' \item{\code{weight_sum}, \code{weight}, \code{leverage}}{ Specify constraint on the sum of the weights, see \code{\link{weight_sum_constraint}}} -#' \item{\code{full_investment}}{ Special case to set \code{min_sum=1} and \code{max_sum=1} of weight sum constraints} -#' \item{\code{dollar_neutral}, \code{active}}{ Special case to set \code{min_sum=0} and \code{max_sum=0} of weight sum constraints} -#' \item{\code{box}}{ Specify constraints for the individual asset weights, see \code{\link{box_constraint}}} -#' \item{\code{long_only}}{ Special case to set \code{min=0} and \code{max=1} of box constraints} -#' \item{\code{group}}{ Specify a constraint on the sum of weights within groups and the number of assets with non-zero weights in groups, see \code{\link{group_constraint}}} -#' \item{\code{turnover}}{ Specify a constraint for target turnover. Turnover is calculated from a set of initial weights, see \code{\link{turnover_constraint}}} -#' \item{\code{diversification}}{ Specify a constraint for target diversification of a set of weights, see \code{\link{diversification_constraint}}} -#' \item{\code{position_limit}}{ Specify a constraint on the number of positions (i.e. assets with non-zero weights as well as the number of long and short positions, see \code{\link{position_limit_constraint}}} -#' \item{\code{return}}{ Specify a constraint for target mean return, see \code{\link{return_constraint}}} -#' \item{\code{factor_exposure}}{ Specify a constraint for risk factor exposures, see \code{\link{factor_exposure_constraint}}} +#' \item{\code{weight_sum}, \code{weight}, \code{leverage}}{ Specify constraint on the sum of the weights, see \code{\link{weight_sum_constraint}} } +#' \item{\code{full_investment}}{ Special case to set \code{min_sum=1} and \code{max_sum=1} of weight sum constraints } +#' \item{\code{dollar_neutral}, \code{active}}{ Special case to set \code{min_sum=0} and \code{max_sum=0} of weight sum constraints } +#' \item{\code{box}}{ box constraints for the individual asset weights, see \code{\link{box_constraint}} } +#' \item{\code{long_only}}{ Special case to set \code{min=0} and \code{max=1} of box constraints } +#' \item{\code{group}}{ specify the sum of weights within groups and the number of assets with non-zero weights in groups, see \code{\link{group_constraint}} } +#' \item{\code{turnover}}{ Specify a constraint for target turnover. 
Turnover is calculated from a set of initial weights, see \code{\link{turnover_constraint}} } +#' \item{\code{diversification}}{ target diversification of a set of weights, see \code{\link{diversification_constraint}} } +#' \item{\code{position_limit}}{ Specify the number of non-zero positions, see \code{\link{position_limit_constraint}} } +#' \item{\code{return}}{ Specify the target mean return, see \code{\link{return_constraint}}} +#' \item{\code{factor_exposure}}{ Specify risk factor exposures, see \code{\link{factor_exposure_constraint}}} #' } #' #' @param portfolio an object of class 'portfolio' to add the constraint to, specifying the constraints for the optimization, see \code{\link{portfolio.spec}} @@ -205,7 +208,16 @@ #' @param \dots any other passthru parameters to specify constraints #' @param indexnum if you are updating a specific constraint, the index number in the $constraints list to update #' @author Ross Bennett -#' @seealso \code{\link{weight_sum_constraint}}, \code{\link{box_constraint}}, \code{\link{group_constraint}}, \code{\link{turnover_constraint}}, \code{\link{diversification_constraint}}, \code{\link{position_limit_constraint}, \code{\link{return_constraint}, \code{\link{factor_exposure_constraint}} +#' @seealso +#' \code{\link{portfolio.spec}} +#' \code{\link{weight_sum_constraint}}, +#' \code{\link{box_constraint}}, +#' \code{\link{group_constraint}}, +#' \code{\link{turnover_constraint}}, +#' \code{\link{diversification_constraint}}, +#' \code{\link{position_limit_constraint}}, +#' \code{\link{return_constraint}}, +#' \code{\link{factor_exposure_constraint}} #' @examples #' data(edhec) #' returns <- edhec[, 1:4] Modified: pkg/PortfolioAnalytics/R/objective.R =================================================================== --- pkg/PortfolioAnalytics/R/objective.R 2013-08-18 09:14:30 UTC (rev 2815) +++ pkg/PortfolioAnalytics/R/objective.R 2013-08-18 12:19:32 UTC (rev 2816) @@ -12,6 +12,9 @@ #' constructor for class 'objective' #' +#' Typically called as a sub-function by the user function \code{\link{add.objective}}. +#' See main documentation there. +#' #' @param name name of the objective which will be used to call a function, like 'ES', 'VaR', 'mean' #' @param target univariate target for the objective, default NULL #' @param arguments default arguments to be passed to an objective function when executed @@ -19,6 +22,8 @@ #' @param \dots any other passthrough parameters #' @param multiplier multiplier to apply to the objective, usually 1 or -1 #' @param objclass string class to apply, default 'objective' +#' @param x an object potentially of type 'objective' to test +#' @seealso \code{\link{add.objective}}, \code{\link{portfolio.spec}} #' @author Brian G. Peterson #' @export objective<-function(name , target=NULL , arguments, enabled=TRUE , ..., multiplier=1, objclass='objective'){ @@ -43,32 +48,14 @@ #' check class of an objective object -#' @param x an object potentially of type 'objective' to test #' @author Brian G. Peterson #' @export is.objective <- function( x ) { inherits( x, "objective" ) } -#' General interface for adding optimization objectives, including risk, return, and risk budget -#' -#' This function is the main function for adding and updating business objectives in an object of type \code{\link{constraint}}. -#' -#' In general, you will define your objective as one of three types: 'return', 'risk', or 'risk_budget'. 
-#' These have special handling and intelligent defaults for dealing with the function most likely to be -#' used as objectives, including mean, median, VaR, ES, etc. -#' -#' @param constraints an object of type "constraints" to add the objective to, specifying the constraints for the optimization, see \code{\link{constraint}} -#' @param type character type of the objective to add or update, currently 'return','risk', or 'risk_budget' -#' @param name name of the objective, should correspond to a function, though we will try to make allowances -#' @param arguments default arguments to be passed to an objective function when executed -#' @param enabled TRUE/FALSE -#' @param \dots any other passthru parameters -#' @param indexnum if you are updating a specific constraint, the index number in the $objectives list to update -#' @author Brian G. Peterson -#' -#' @seealso \code{\link{constraint}} -#' +#' @rdname add.objective +#' @name add.objective #' @export add.objective_v1 <- function(constraints, type, name, arguments=NULL, enabled=TRUE, ..., indexnum=NULL) { @@ -131,26 +118,8 @@ return(constraints) } -#' General interface for adding optimization objectives, including risk, return, and risk budget -#' -#' This function is the main function for adding and updating business objectives in an object of type \code{\link{portfolio}}. -#' -#' In general, you will define your objective as one of three types: 'return', 'risk', or 'risk_budget'. -#' These have special handling and intelligent defaults for dealing with the function most likely to be -#' used as objectives, including mean, median, VaR, ES, etc. -#' -#' @param portfolio an object of type 'portfolio' to add the objective to, specifying the portfolio for the optimization, see \code{\link{portfolio}} -#' @param type character type of the objective to add or update, currently 'return','risk', or 'risk_budget' -#' @param name name of the objective, should correspond to a function, though we will try to make allowances -#' @param arguments default arguments to be passed to an objective function when executed -#' @param enabled TRUE/FALSE -#' @param \dots any other passthru parameters -#' @param indexnum if you are updating a specific constraint, the index number in the $objectives list to update -#' @author Brian G. Peterson and Ross Bennett -#' @aliases add.objective #' @rdname add.objective -#' @seealso \code{\link{objective}} -#' +#' @name add.objective #' @export add.objective_v2 <- function(portfolio, type, name, arguments=NULL, enabled=TRUE, ..., indexnum=NULL){ # This function is based on the original add.objective function, but modified @@ -216,7 +185,30 @@ return(portfolio) } -# Alias add.objective_v2 to add.objective +#' General interface for adding optimization objectives, including risk, return, and risk budget +#' +#' This function is the main function for adding and updating business objectives in an object of type \code{\link{portfolio.spec}}. +#' +#' In general, you will define your objective as one of three types: 'return', 'risk', or 'risk_budget'. +#' These have special handling and intelligent defaults for dealing with the functions most likely to be +#' used as objectives, including mean, median, VaR, ES, etc. +#' +#' Objectives of type 'turnover' and 'minmax' are also supported.
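A minimal usage sketch of the interface documented above, assuming the v2 specification and the edhec data set from PerformanceAnalytics (the particular objective choices here are illustrative):

library(PortfolioAnalytics)
data(edhec)
pspec <- portfolio.spec(assets=colnames(edhec))
# a 'return' objective: maximize mean return
pspec <- add.objective(portfolio=pspec, type="return", name="mean")
# a 'risk' objective: minimize expected shortfall at the 95% level
pspec <- add.objective(portfolio=pspec, type="risk", name="ES", arguments=list(p=0.95))
# a 'risk_budget' objective on the same risk measure
pspec <- add.objective(portfolio=pspec, type="risk_budget", name="ES", arguments=list(p=0.95))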
+#' +#' @param portfolio an object of type 'portfolio' to add the objective to, specifying the portfolio for the optimization, see \code{\link{portfolio}} +#' @param type character type of the objective to add or update, currently 'return','risk', or 'risk_budget' +#' @param name name of the objective, should correspond to a function, though we will try to make allowances +#' @param arguments default arguments to be passed to an objective function when executed +#' @param enabled TRUE/FALSE +#' @param \dots any other passthru parameters +#' @param indexnum if you are updating a specific constraint, the index number in the $objectives list to update +#' @param constraints an object of type "constraints" to add the objective to, specifying the constraints for the optimization, see \code{\link{constraint}} (for _v1 objectives only) +#' @author Brian G. Peterson and Ross Bennett +#' @aliases +#' add.objective_v2, add.objective_v1 +#' @seealso \code{\link{objective}}, \code{\link{portfolio.spec}} +#' @rdname add.objective +#' @name add.objective #' @export add.objective <- add.objective_v2 @@ -352,7 +344,7 @@ #' constructor for class tmp_minmax_objective #' -#' I am add this as a temporary objective allowing for a min and max to be specified. Testing +#' I am adding this as a temporary objective allowing for a min and max to be specified. Testing #' to understand how the objective function responds to a range of allowable values. I will #' likely add this to the turnover, diversification, and volatility constraints #' allowing the user to specify a range of values. @@ -409,4 +401,4 @@ portfolio$objectives <- objectives return(portfolio) -} \ No newline at end of file +} Modified: pkg/PortfolioAnalytics/R/objectiveFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/objectiveFUN.R 2013-08-18 09:14:30 UTC (rev 2815) +++ pkg/PortfolioAnalytics/R/objectiveFUN.R 2013-08-18 12:19:32 UTC (rev 2816) @@ -4,6 +4,7 @@ #' @param weights vector of weights from optimization #' @param wts.init vector of initial weights used to calculate turnover from #' @author Ross Bennett +#' @export turnover <- function(weights, wts.init=NULL) { # turnover function from https://r-forge.r-project.org/scm/viewvc.php/pkg/PortfolioAnalytics/sandbox/script.workshop2012.R?view=markup&root=returnanalytics Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-18 09:14:30 UTC (rev 2815) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-18 12:19:32 UTC (rev 2816) @@ -10,61 +10,9 @@ # ############################################################################### -#' wrapper for constrained optimization of portfolios -#' -#' This function aims to provide a wrapper for constrained optimization of -#' portfolios that allows the user to specify box constraints and business -#' objectives. -#' -#' This function currently supports DEoptim and random portfolios as back ends. -#' Additional back end contributions for Rmetrics, ghyp, etc. would be welcome. -#' -#' When using random portfolios, search_size is precisely that, how many -#' portfolios to test. 
You need to make sure to set your feasible weights -#' in generatesequence to make sure you have search_size unique -#' portfolios to test, typically by manipulating the 'by' parameter -#' to select something smaller than .01 -#' (I often use .002, as .001 seems like overkill) -#' -#' When using DE, search_size is decomposed into two other parameters -#' which it interacts with, NP and itermax. -#' -#' NP, the number of members in each population, is set to cap at 2000 in -#' DEoptim, and by default is the number of parameters (assets/weights) *10. -#' -#' itermax, if not passed in dots, defaults to the number of parameters (assets/weights) *50. -#' -#' When using GenSA and want to set \code{verbose=TRUE}, instead use \code{trace}. -#' -#' The extension to ROI solves a limit type of convex optimization problems: -#' 1) Maxmimize portfolio return subject box constraints on weights -#' 2) Minimize portfolio variance subject to box constraints (otherwise known as global minimum variance portfolio) -#' 3) Minimize portfolio variance subject to box constraints and a desired portfolio return -#' 4) Maximize quadratic utility subject to box constraints and risk aversion parameter (this is passed into \code{optimize.portfolio} as as added argument to the \code{constraints} object) -#' 5) Mean CVaR optiimization subject to box constraints and target portfolio return -#' Lastly, because these convex optimization problem are standardized, there is no need for a penalty term. -#' Therefore, the \code{multiplier} argument in \code{\link{add.objective}} passed into the complete constraint object are ingnored by the solver. -#' ROI also can solve quadratic and linear problems with group constraints by added a \code{groups} argument into the constraints object. -#' This argument is a vector with each of its elements the number of assets per group. -#' The group constraints, \code{cLO} and \code{cUP}, are also added to the constraints object. -#' -#' For example, if you have 9 assets, and would like to require that the the first 3 assets are in one group, the second 3 are in another, and the third are in another, then you add the grouping by \code{constraints$groups <- c(3,3,3)}. -#' To apply the constraints that the first group must compose of at least 20% of the weight, the second group 15%, and the third group 10%, and that now signle group should compose of more that 50% of the weight, then you would add the lower group constraint as \code{constraints$cLO <- c(0.20, 0.15, 0.10)} and the upper constraints as \code{constraints$cUP <- rep(0.5,3)}. -#' These group constraint can be set for all five optimization problems listed above. -#' -#' If you would like to interface with \code{optimize.portfolio} using matrix formulations, then use \code{ROI_old}. -#' -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns -#' @param constraints an object of type "constraints" specifying the constraints for the optimization, see \code{\link{constraint}}, if using closed for solver, need to pass a \code{\link{constraint_ROI}} object. -#' @param optimize_method one of "DEoptim", "random", "ROI","ROI_old", "pso", "GenSA". For using \code{ROI_old}, need to use a constraint_ROI object in constraints. For using \code{ROI}, pass standard \code{constratint} object in \code{constraints} argument. Presently, ROI has plugins for \code{quadprog} and \code{Rglpk}. 
-#' @param search_size integer, how many portfolios to test, default 20,000 -#' @param trace TRUE/FALSE if TRUE will attempt to return additional information on the path or portfolios searched -#' @param \dots any other passthru parameters -#' @param rp matrix of random portfolio weights, default NULL, mostly for automated use by rebalancing optimization or repeated tests on same portfolios -#' @param momentFUN the name of a function to call to set portfolio moments, default \code{\link{set.portfolio.moments}} -#' -#' @return a list containing the optimal weights, some summary statistics, the function call, and optionally trace information -#' @author Kris Boudt, Peter Carl, Brian G. Peterson + +#' @rdname optimize.portfolio +#' @name optimize.portfolio #' @export optimize.portfolio_v1 <- function( R, @@ -478,68 +426,6 @@ } ##### version 2 of optimize.portfolio ##### -#' version 2 wrapper for constrained optimization of portfolios -#' -#' This function aims to provide a wrapper for constrained optimization of -#' portfolios that allows the user to specify constraints and business -#' objectives. -#' -#' This function currently supports DEoptim, random portfolios, ROI, pso, and GenSA as back ends. -#' Additional back end contributions for Rmetrics, ghyp, etc. would be welcome. -#' -#' When using random portfolios, search_size is precisely that, how many -#' portfolios to test. You need to make sure to set your feasible weights -#' in generatesequence to make sure you have search_size unique -#' portfolios to test, typically by manipulating the 'by' parameter -#' to select something smaller than .01 -#' (I often use .002, as .001 seems like overkill) -#' -#' When using DE, search_size is decomposed into two other parameters -#' which it interacts with, NP and itermax. -#' -#' NP, the number of members in each population, is set to cap at 2000 in -#' DEoptim, and by default is the number of parameters (assets/weights) *10. -#' -#' itermax, if not passed in dots, defaults to the number of parameters (assets/weights) *50. -#' -#' When using GenSA and want to set \code{verbose=TRUE}, instead use \code{trace}. -#' -#' The extension to ROI solves a limited type of convex optimization problems: -#' \itemize{ -#' \item{Maxmimize portfolio return subject leverage, box, group, position limit, target mean return, and/or factor exposure constraints on weights} -#' \item{Minimize portfolio variance subject to leverage, box, group, and/or factor exposure constraints (otherwise known as global minimum variance portfolio)} -#' \item{Minimize portfolio variance subject to leverage, box, group, and/or factor exposure constraints and a desired portfolio return} -#' \item{Maximize quadratic utility subject to leverage, box, group, target mean return, and/or factor exposure constraints and risk aversion parameter. -#' (The risk aversion parameter is passed into \code{optimize.portfolio} as an added argument to the \code{portfolio} object)} -#' \item{Mean CVaR optimization subject to leverage, box, group, position limit, target mean return, and/or factor exposure constraints and target portfolio return} -#' } -#' Lastly, because these convex optimization problem are standardized, there is no need for a penalty term. -#' Therefore, the \code{multiplier} argument in \code{\link{add.objective}} passed into the complete constraint object are ingnored by the solver. -#' -#' If you would like to interface with \code{optimize.portfolio} using matrix formulations, then use \code{ROI_old}. 
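The group constraint usage described above can be sketched as follows, assuming the v1 'constraint' interface and nine assets taken from the edhec data set (the bounds are illustrative):

library(PortfolioAnalytics)
data(edhec)
constr <- constraint(assets=colnames(edhec)[1:9], min=0, max=1, min_sum=1, max_sum=1)
# three groups of three assets each
constr$groups <- c(3, 3, 3)
# the groups must hold at least 20%, 15% and 10% of the weight respectively
constr$cLO <- c(0.20, 0.15, 0.10)
# and no single group should hold more than 50% of the weight
constr$cUP <- rep(0.5, 3)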
-#' -#' An object of class \code{v1_constraint} can be passed in for the \code{constraints} argument. -#' The \code{v1_constraint} object was used in the previous 'v1' specification to specify the -#' constraints and objectives for the optimization problem, see \code{\link{constraint}}. -#' We will attempt to detect if the object passed into the constraints argument -#' is a \code{v1_constraint} object and update to the 'v2' specification by adding the -#' constraints and objectives to the \code{portfolio} object. -#' -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns -#' @param portfolio an object of type "portfolio" specifying the constraints and objectives for the optimization -#' @param constraints default=NULL, a list of constraint objects. An object of class ]v1_constraint' can be passed in here. -#' @param objectives default=NULL, a list of objective objects -#' @param optimize_method one of "DEoptim", "random", "ROI","ROI_old", "pso", "GenSA". For using \code{ROI_old}, need to use a constraint_ROI object in constraints. For using \code{ROI}, pass standard \code{constratint} object in \code{constraints} argument. Presently, ROI has plugins for \code{quadprog} and \code{Rglpk}. -#' @param search_size integer, how many portfolios to test, default 20,000 -#' @param trace TRUE/FALSE if TRUE will attempt to return additional information on the path or portfolios searched -#' @param \dots any other passthru parameters -#' @param rp matrix of random portfolio weights, default NULL, mostly for automated use by rebalancing optimization or repeated tests on same portfolios -#' @param momentFUN the name of a function to call to set portfolio moments, default \code{\link{set.portfolio.moments_v2}} -#' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE. -#' -#' @return a list containing the optimal weights, some summary statistics, the function call, and optionally trace information -#' @author Kris Boudt, Peter Carl, Brian G. Peterson -#' @aliases optimize.portfolio #' @rdname optimize.portfolio #' @export optimize.portfolio_v2 <- function( @@ -942,31 +828,101 @@ return(out) } -# Alias for optimize.portfolio_ -#' @export -optimize.portfolio <- optimize.portfolio_v2 - -#' version 1 portfolio optimization with support for rebalancing or rolling periods +#' constrained optimization of portfolios #' -#' This function may eventually be wrapped into optimize.portfolio +#' This function aims to provide a wrapper for constrained optimization of +#' portfolios that allows the user to specify box constraints and business +#' objectives. +#' It will be the objective function \code{FUN} passed to any supported \R +#' optimization solver. +#' -#' For now, we'll set the rebalancing periods here, though I think they should eventually be part of the constraints object +#' @details +#' This function currently supports DEoptim and random portfolios as back ends. +#' Additional back end contributions for Rmetrics, ghyp, etc. would be welcome. +#' +#' When using random portfolios, search_size is precisely that, how many +#' portfolios to test. You need to make sure to set your feasible weights +#' in generatesequence to make sure you have search_size unique +#' portfolios to test, typically by manipulating the 'by' parameter +#' to select something smaller than .01 +#' (I often use .002, as .001 seems like overkill) #' -#' This function is massively parallel, and will require 'foreach' and we suggest that you register a parallel backend.
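A minimal sketch of how these arguments are typically supplied, assuming the v2 interface and the edhec data set (the constraint and objective choices are illustrative):

library(PortfolioAnalytics)
data(edhec)
pspec <- portfolio.spec(assets=colnames(edhec))
pspec <- add.constraint(portfolio=pspec, type="full_investment")
pspec <- add.constraint(portfolio=pspec, type="long_only")
pspec <- add.objective(portfolio=pspec, type="risk", name="ES")
# random portfolios: search_size is simply the number of portfolios tested
opt.rp <- optimize.portfolio(R=edhec, portfolio=pspec, optimize_method="random", search_size=2000, trace=TRUE)
# DEoptim: search_size is decomposed into NP and itermax, as described below
opt.de <- optimize.portfolio(R=edhec, portfolio=pspec, optimize_method="DEoptim", search_size=2000)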
+#' When using DE, search_size is decomposed into two other parameters +#' which it interacts with, NP and itermax. #' +#' NP, the number of members in each population, is set to cap at 2000 in +#' DEoptim, and by default is the number of parameters (assets/weights) *10. +#' +#' itermax, if not passed in dots, defaults to the number of parameters (assets/weights) *50. +#' +#' When using GenSA and you want to set \code{verbose=TRUE}, instead use \code{trace}. +#' +#' The extension to ROI solves a limited type of convex optimization problems: +#' \itemize{ +#' \item{Maximize portfolio return subject to leverage, box, group, position limit, target mean return, and/or factor exposure constraints on weights} +#' \item{Minimize portfolio variance subject to leverage, box, group, and/or factor exposure constraints (otherwise known as global minimum variance portfolio)} +#' \item{Minimize portfolio variance subject to leverage, box, group, and/or factor exposure constraints and a desired portfolio return} +#' \item{Maximize quadratic utility subject to leverage, box, group, target mean return, and/or factor exposure constraints and risk aversion parameter. +#' (The risk aversion parameter is passed into \code{optimize.portfolio} as an added argument to the \code{portfolio} object)} +#' \item{Mean CVaR optimization subject to leverage, box, group, position limit, target mean return, and/or factor exposure constraints and target portfolio return} +#' } [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2816 From noreply at r-forge.r-project.org Sun Aug 18 14:39:18 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 18 Aug 2013 14:39:18 +0200 (CEST) Subject: [Returnanalytics-commits] r2817 - in pkg/PerformanceAnalytics/sandbox/pulkit: .
R man Message-ID: <20130818123918.BC06C1842B9@r-forge.r-project.org> Author: pulkit Date: 2013-08-18 14:39:18 +0200 (Sun, 18 Aug 2013) New Revision: 2817 Removed: pkg/PerformanceAnalytics/sandbox/pulkit/man/DrawdownGPD.Rd Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R Log: some errors and documentation modifications Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-18 12:19:32 UTC (rev 2816) +++ pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-18 12:39:18 UTC (rev 2817) @@ -27,7 +27,6 @@ 'DrawdownBetaMulti.R' 'DrawdownBeta.R' 'EDDCOPS.R' - 'edd.R' 'Edd.R' 'ExtremeDrawdown.R' 'GoldenSection.R' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-18 12:19:32 UTC (rev 2816) +++ pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-18 12:39:18 UTC (rev 2817) @@ -1,10 +1,21 @@ export(AlphaDrawdown) -#export(BenchmarkSR) -#export(chart.BenchmarkSR) -#export(chart.SRIndifference) -#export(EconomicDrawdown) -#export(EDDCOPS) -#export(MinTrackRecord) -#export(REDDCOPS) -#export(rollDrawdown) -#export(rollEconomicMax) +export(BenchmarkSR) +export(chart.BenchmarkSR) +export(chart.SRIndifference) +export(EconomicDrawdown) +export(EDDCOPS) +export(MinTrackRecord) +export(REDDCOPS) +export(rollDrawdown) +export(rollEconomicMax) +export(CDaR) +export(CdarMultiPath) +export(chart.Penance) +export(chart.REDD) +export(chart.SharpeEfficientFrontier) +export(BetaDrawdown) +export(MultiBetaDrawdown) +export(EDDCOPS) +export(DrawdownGPD) +export(golden_section) +export(MaxDD) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-18 12:19:32 UTC (rev 2816) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-18 12:39:18 UTC (rev 2817) @@ -41,6 +41,7 @@ #' #'chart.BenchmarkSR(edhec,vs="strategies") #'chart.BenchmarkSR(edhec,vs="sharpe") +#' #'@export chart.BenchmarkSR<-function(R=NULL,main=NULL,ylab = NULL,xlab = NULL,element.color="darkgrey",lwd = 2,pch = 1,cex = 1,cex.axis=0.8,cex.lab = 1,cex.main = 1,vs=c("sharpe","correlation","strategies"),xlim = NULL,ylim = NULL,...){ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R 2013-08-18 12:19:32 UTC (rev 2816) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R 2013-08-18 12:39:18 UTC (rev 2817) @@ -31,8 +31,9 @@ #' #'@references #'Zabarankin, M., Pavlikov, K., and S. Uryasev. 
Capital Asset Pricing Model (CAPM) -#' with Drawdown Measure.Research Report 2012-9, ISE Dept., University of Florida, -#' September 2012 +#' with Drawdown Measure. Research Report 2012-9, ISE Dept., University of Florida, September 2012 +#' +#'@export CdarMultiPath<-function (R,ps,sample, geometric = TRUE,p = 0.95, ...) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-18 12:19:32 UTC (rev 2816) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-18 12:39:18 UTC (rev 2817) @@ -8,7 +8,7 @@ #' #' Modified Generalized Pareto Distribution is given by the following formula #' -#' \dqeqn{G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m^\gamma}{\psi})^(-1/\eta), if \eta \neq 0 \\ 1- e^{-frac{m^\gamma}{\psi}}, if \eta = 0,\end{array}} +#' \deqn{G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m^\gamma}{\psi})^{-1/\eta}, if \eta \neq 0 \\ 1- e^{-\frac{m^\gamma}{\psi}}, if \eta = 0,\end{array}} #' #' Here \eqn{\gamma{\epsilon}R} is the modifying parameter. When \eqn{\gamma<1} the corresponding densities are #' strictly decreasing with heavier tail; the GPD is recovered by setting \eqn{\gamma = 1}. \eqn{\gamma \textgreater 1} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R 2013-08-18 12:19:32 UTC (rev 2816) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R 2013-08-18 12:39:18 UTC (rev 2817) @@ -9,7 +9,7 @@ #' Maximum Drawdown is given by the formula #' When the distribution is normal #' -#' \deqn{MaxDD_{\alpha}=max\left\{0,\frac{(z_{\alpha}\sigma)^2}{4\mu}\right\}} +#' \deqn{MaxDD_\alpha=max\left\{0,\frac{(z_\alpha\sigma)^2}{4\mu}\right\}} #' #' The time at which the Maximum Drawdown occurs is given by #' \deqn{t^\ast=\biggl(\frac{Z_{\alpha}\sigma}{2\mu}\biggr)^2} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R 2013-08-18 12:19:32 UTC (rev 2816) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R 2013-08-18 12:39:18 UTC (rev 2817) @@ -34,6 +34,8 @@ #'chart.Penance(edhec,0.95) #' #'@references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the 'Triple Penance' Rule (January 1, 2013). +#' +#'@export chart.Penance<-function(R,confidence,type=c("ar","normal"),reference.grid = TRUE,main=NULL,ylab = NULL,xlab = NULL,element.color="darkgrey",lwd = 2,pch = 1,cex = 1,cex.axis=0.8,cex.lab = 1,cex.main = 1,xlim = NULL,ylim = NULL,...){ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R 2013-08-18 12:19:32 UTC (rev 2816) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R 2013-08-18 12:39:18 UTC (rev 2817) @@ -1,3 +1,23 @@ +#'@title +#' Time series of Rolling Economic Drawdown +#' +#'@description +#' This function plots the time series of Rolling Economic Drawdown. +#' For more details on rolling economic drawdown see \code{rollDrawdown}. +#' +#'@param R an xts, vector, matrix, data frame, timeseries, or zoo object of asset return.
+#'@param Rf risk free rate can be vector such as government security rate of return +#'@param h lookback period +#'@param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining(FALSE) to aggregate returns, default is TRUE. +#'@param legend.loc set the legend.loc, as in \code{\link{plot}} +#'@param colorset set the colorset label, as in \code{\link{plot}} +#'@param \dots any other variable +#'@references Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to +#'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) +#'@examples +#'chart.REDD(edhec,0.08,20) +#' + chart.REDD<-function(R,rf,h, geometric = TRUE,legend.loc = NULL, colorset = (1:12),...) { #DESCRIPTION: Deleted: pkg/PerformanceAnalytics/sandbox/pulkit/man/DrawdownGPD.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/DrawdownGPD.Rd 2013-08-18 12:19:32 UTC (rev 2816) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/DrawdownGPD.Rd 2013-08-18 12:39:18 UTC (rev 2817) @@ -1,88 +0,0 @@ -\name{DrawdownGPD} -\alias{DrawdownGPD} -\title{Modelling Drawdown using Extreme Value Theory - -It has been shown empirically that Drawdowns can be modelled using Modified Generalized Pareto -distribution(MGPD), Generalized Pareto Distribution(GPD) and other particular cases of MGPD such -as weibull distribution \eqn{MGPD(\gamma,0,\psi)} and unit exponential distribution\eqn{MGPD(1,0,\psi)} - -Modified Generalized Pareto Distribution is given by the following formula - -\dqeqn{G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m^\gamma}{\psi})^(-1/\eta), if \eta \neq 0 \\ 1- e^{-frac{m^\gamma}{\psi}}, if \eta = 0,\end{array}} - -Here \eqn{\gamma{\epsilon}R} is the modifying parameter. When \eqn{\gamma<1} the corresponding densities are -strictly decreasing with heavier tail; the GDP is recovered by setting \eqn{\gamma = 1} .\eqn{\gamma \textgreater 1} - -The GDP is given by the following equation. \eqn{MGPD(1,\eta,\psi)} - -\deqn{G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m}{\psi})^(-1/\eta), if \eta \neq 0 \\ 1- e^{-frac{m}{\psi}}, if \eta = 0,\end{array}} - -The weibull distribution is given by the following equation \eqn{MGPD(\gamma,0,\psi)} - -\deqn{G(m) = 1- e^{-frac{m^\gamma}{\psi}}} - -The unit exponential distribution is given by the following equation \eqn{MGPD(1,0,\psi)} - -\deqn{G(m) = 1- e^{-m}}} -\usage{ - DrawdownGPD(R, type = c("gpd", "pd", "weibull"), - threshold = 0.9) -} -\arguments{ - \item{R}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset return} - - \item{type}{The type of distribution - "gpd","pd","weibull"} - - \item{threshold}{The threshold beyond which the drawdowns - have to be modelled} -} -\description{ - Modelling Drawdown using Extreme Value Theory - - It has been shown empirically that Drawdowns can be - modelled using Modified Generalized Pareto - distribution(MGPD), Generalized Pareto Distribution(GPD) - and other particular cases of MGPD such as weibull - distribution \eqn{MGPD(\gamma,0,\psi)} and unit - exponential distribution\eqn{MGPD(1,0,\psi)} - - Modified Generalized Pareto Distribution is given by the - following formula - - \dqeqn{G_{\eta}(m) = \begin{array}{l} - 1-(1+\eta\frac{m^\gamma}{\psi})^(-1/\eta), if \eta \neq 0 - \\ 1- e^{-frac{m^\gamma}{\psi}}, if \eta = 0,\end{array}} - - Here \eqn{\gamma{\epsilon}R} is the modifying parameter. 
- When \eqn{\gamma<1} the corresponding densities are - strictly decreasing with heavier tail; the GDP is - recovered by setting \eqn{\gamma = 1} .\eqn{\gamma - \textgreater 1} - - The GDP is given by the following equation. - \eqn{MGPD(1,\eta,\psi)} - - \deqn{G_{\eta}(m) = \begin{array}{l} - 1-(1+\eta\frac{m}{\psi})^(-1/\eta), if \eta \neq 0 \\ 1- - e^{-frac{m}{\psi}}, if \eta = 0,\end{array}} - - The weibull distribution is given by the following - equation \eqn{MGPD(\gamma,0,\psi)} - - \deqn{G(m) = 1- e^{-frac{m^\gamma}{\psi}}} - - The unit exponential distribution is given by the - following equation \eqn{MGPD(1,0,\psi)} - - \deqn{G(m) = 1- e^{-m}} -} -\references{ - Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum - Drawdown: Models and Applications (November 2003). - Coppead Working Paper Series No. 359. Available at SSRN: - http://ssrn.com/abstract=477322 or - http://dx.doi.org/10.2139/ssrn.477322. -} - From noreply at r-forge.r-project.org Sun Aug 18 15:19:59 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 18 Aug 2013 15:19:59 +0200 (CEST) Subject: [Returnanalytics-commits] r2818 - in pkg/PerformanceAnalytics/sandbox/pulkit: . R man Message-ID: <20130818131959.4FE8A184B61@r-forge.r-project.org> Author: pulkit Date: 2013-08-18 15:19:58 +0200 (Sun, 18 Aug 2013) New Revision: 2818 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MonteSimulTriplePenance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd Log: some modifications Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-18 13:19:58 UTC (rev 2818) @@ -1,21 +1,21 @@ export(AlphaDrawdown) export(BenchmarkSR) +export(CdarMultiPath) export(chart.BenchmarkSR) +export(chart.Penance) export(chart.SRIndifference) +export(DrawdownGPD) export(EconomicDrawdown) export(EDDCOPS) +export(golden_section) export(MinTrackRecord) +export(MonteSimulTriplePenance) +export(MultiBetaDrawdown) +export(ProbSharpeRatio) +export(PsrPortfolio) export(REDDCOPS) export(rollDrawdown) export(rollEconomicMax) -export(CDaR) -export(CdarMultiPath) -export(chart.Penance) -export(chart.REDD) -export(chart.SharpeEfficientFrontier) -export(BetaDrawdown) -export(MultiBetaDrawdown) -export(EDDCOPS) -export(DrawdownGPD) -export(golden_section) -export(MaxDD) +export(table.Penance) 
+export(table.PSR) +export(TuW) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-18 13:19:58 UTC (rev 2818) @@ -40,8 +40,7 @@ #'of Florida,September 2012. #' #'@examples -#' -#'BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431 +#'BetaDrawdown(edhec[,1],edhec[,2]) BetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R 2013-08-18 13:19:58 UTC (rev 2818) @@ -37,6 +37,7 @@ #'@examples #'MultiBetaDrawdown(cbind(edhec,edhec),cbind(edhec[,2],edhec[,2]),sample = 2,ps=c(0.4,0.6)) #'BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431 +#'@export MultiBetaDrawdown<-function(R,Rm,sample,ps,h=0,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-18 13:19:58 UTC (rev 2818) @@ -34,7 +34,7 @@ #'Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003). Coppead Working Paper Series No. 359. #'Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322. #' -#' +#'@export DrawdownGPD<-function(R,type=c("gpd","pd","weibull"),threshold=0.90){ x = checkData(R) columns = ncol(R) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R 2013-08-18 13:19:58 UTC (rev 2818) @@ -14,14 +14,13 @@ #' in which \eqn{x_2} is chosen. If \eqn{f(x_2)>f(x_1)} then the new three points would be \eqn{x_l \textless x_2 \textless x_1} else if #' \eqn{f(x_2)0$, such that $\pi_{t-1}<0$ and $\pi_t>0$. +#' For a particular sequence \eqn{\left\{\pi_t\right\}}, the time under water \eqn{(TuW)} +#' is the minimum number of observations, \eqn{t>0}, such that \eqn{\pi_{t-1}<0} and \eqn{\pi_t>0}. #' #' For a normal distribution Maximum Time Under Water is given by the following expression. 
#' \deqn{MaxTuW_\alpha=\biggl(\frac{Z_\alpha{\sigma}}{\mu}\biggr)^2} @@ -32,6 +32,7 @@ #' @examples #' TuW(edhec,0.95,"ar") #' TuW(edhec[,1],0.95,"normal") # expected value 103.2573 +#'@export TuW<-function(R,confidence,type=c("ar","normal"),...){ x = checkData(R) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R 2013-08-18 13:19:58 UTC (rev 2818) @@ -21,6 +21,7 @@ #'data(edhec) #'table.PSR(edhec[,1],0.20) #' +#'@export table.PSR<-function(R=NULL,refSR,Rf=0,p=0.95,weights = NULL,...){ if(!is.null(R)){ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R 2013-08-18 13:19:58 UTC (rev 2818) @@ -8,6 +8,7 @@ #' @param confidence the confidence interval #' #' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the 'Triple Penance' Rule (January 1, 2013). +#' @export table.Penance<-function(R,confidence){ # DESCRIPTION: Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd 2013-08-18 13:19:58 UTC (rev 2818) @@ -65,7 +65,7 @@ the market performs well. } \examples{ -BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431 +BetaDrawdown(edhec[,1],edhec[,2]) } \references{ Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd 2013-08-18 13:19:58 UTC (rev 2818) @@ -3,8 +3,6 @@ \title{Calculate the Economic Drawdown} \usage{ EconomicDrawdown(R, Rf, geometric = TRUE, ...) - - EconomicDrawdown(R, Rf, geometric = TRUE, ...) } \arguments{ \item{R}{an xts, vector, matrix, data frame, timeseries, @@ -18,18 +16,6 @@ default is TRUE} \item{\dots}{any other variable} - - \item{R}{an xts, vector, matrix, data frame, timeseries, - or zoo object of asset return.} - - \item{Rf}{risk free rate can be vector such as government - security rate of return} - - \item{geometric}{utilize geometric chaining (TRUE) or - simple/arithmetic chaining(FALSE) to aggregate returns, - default is TRUE} - - \item{\dots}{any other variable} } \description{ \code{EconomicDrawdown} calculates the Economic @@ -44,31 +30,13 @@ Here EM stands for Economic Max and is the code \code{\link{EconomicMax}} - - \code{EconomicDrawdown} calculates the Economic - Drawdown(EDD) for a return series.To calculate the - economic drawdown cumulative return and economic max is - calculated for each point. The risk free return(rf) is taken as the input.
- - Economic Drawdown is given by the equation - - \deqn{EDD(t)=1-\frac{W_t}/{EM(t)}} - - Here EM stands for Economic Max and is the code - \code{\link{EconomicMax}} } \examples{ EconomicDrawdown(edhec,0.08,100) -EconomicDrawdown(edhec,0.08,100) } \references{ Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) - - Yang, Z. George and Zhong, Liang, Optimal Portfolio - Strategy to Control Maximum Drawdown - The Case of Risk - Based Dynamic Asset Allocation (February 25, 2012) } Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd 2013-08-18 13:19:58 UTC (rev 2818) @@ -21,19 +21,20 @@ autoregressive. For a normal process Maximum Drawdown is given by the formula When the distribution is normal - \deqn{MaxDD_{\alpha}=max\left\{0,\frac{(z_{\alpha}\sigma)^2}{4\mu}\right\}} + \deqn{MaxDD_\alpha=max\left\{0,\frac{(z_\alpha\sigma)^2}{4\mu}\right\}} The time at which the Maximum Drawdown occurs is given by \deqn{t^\ast=\biggl(\frac{Z_{\alpha}\sigma}{2\mu}\biggr)^2} - Here $Z_{\alpha}$ is the critical value of the Standard - Normal Distribution associated with a probability - $\alpha$.$\sigma$ and $\mu$ are the Standard Distribution - and the mean respectively. When the distribution is - non-normal and time dependent, Autoregressive process. + Here \eqn{Z_{\alpha}} is the critical value of the + Standard Normal Distribution associated with a + probability \eqn{\alpha}. \eqn{\sigma} and \eqn{\mu} are + the standard deviation and the mean respectively. When + the distribution is non-normal and time dependent, an + autoregressive process is used. \deqn{Q_{\alpha,t}=\frac{\phi^{(t+1)}-\phi}{\phi-1}(\triangle\pi_0-\mu)+{\mu}t+Z_{\alpha}\frac{\sigma}{|\phi-1|}\biggl(\frac{\phi^{2(t+1)}-1}{\phi^2-1}-2\frac{\phi^{(t+1)}-1}{\phi-1}+t+1\biggr)^{1/2}} - $\phi$ is estimated as + \eqn{\phi} is estimated as \deqn{\hat{\phi} = Cov_0[\triangle\pi_\tau,\triangle\pi_{\tau-1}](Cov_0[\triangle\pi_{\tau-1},\triangle\pi_{\tau-1}])^{-1}} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd 2013-08-18 13:19:58 UTC (rev 2818) @@ -48,8 +48,8 @@ \deqn{MinTRL = n^\ast = 1+\biggl[1-\hat{\gamma_3}\hat{SR}+\frac{\hat{\gamma_4}}{4}\hat{SR^2}\biggr]\biggl(\frac{Z_\alpha}{\hat{SR}-SR^\ast}\biggr)^2} - $\gamma{_3}$ and $\gamma{_4}$ are the skewness and - kurtosis respectively. It is important to note that + \eqn{\gamma{_3}} and \eqn{\gamma{_4}} are the skewness + and kurtosis respectively. It is important to note that MinTRL is expressed in terms of number of observations, not annual or calendar terms.
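The MinTRL expression above reduces to a few lines of base R. A minimal sketch, following the formula exactly as printed in the Rd file (the function and argument names are illustrative, not the package interface, and all moments are assumed non-annualized, as noted above):

minTRL <- function(sr, sr_star, skew, kurt, alpha=0.05){
  # critical value Z_alpha of the standard normal distribution
  z <- qnorm(1 - alpha)
  1 + (1 - skew*sr + (kurt/4)*sr^2) * (z/(sr - sr_star))^2
}
# e.g. an observed Sharpe ratio of 0.08 per period against a 0.05 threshold
minTRL(sr=0.08, sr_star=0.05, skew=-0.5, kurt=4)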
Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd 2013-08-18 13:19:58 UTC (rev 2818) @@ -47,11 +47,12 @@ \deqn{\hat{PSR}(SR^\ast) = Z\biggl[\frac{(\hat{SR}-SR^\ast)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\ast - + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\biggr]} Here $n$ - is the track record length or the number of data points. - It can be daily,weekly or yearly depending on the input - given $\hat{\gamma{_3}}$ and $\hat{\gamma{_4}}$ are the - skewness and kurtosis respectively. + + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\biggr]} Here + \eqn{n} is the track record length or the number of data + points. It can be daily, weekly or yearly depending on the + input given. \eqn{\hat{\gamma{_3}}} and + \eqn{\hat{\gamma{_4}}} are the skewness and kurtosis + respectively. } \examples{ data(edhec) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd 2013-08-18 13:19:58 UTC (rev 2818) @@ -23,8 +23,8 @@ would like to find the vector of weights that maximize the expression - \deqn{\hat{PSR}(SR^\ast) = - Z\biggl[\frac{(\hat{SR}-SR^\ast)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\ast + \deqn{\hat{PSR}(SR^\ast) = + Z\biggl[\frac{(\hat{SR}-SR^\ast)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\ast + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\biggr]} where \eqn{\sigma = \sqrt{E[(r-\mu)^2]}} ,its standard @@ -32,14 +32,14 @@ its skewness, \eqn{\gamma_4=\frac{E\biggl[(r-\mu)^4\biggr]}{\sigma^4}} its kurtosis and \eqn{SR = \frac{\mu}{\sigma}} its Sharpe - Ratio. Because \eqn{\hat{PSR}(SR^\ast)=Z[\hat{Z^\ast}]} - is a monotonic increasing function of \eqn{\hat{Z^\ast}} - ,it suffices to compute the vector that maximizes - \eqn{\hat{Z^\ast}} + Ratio. Because \eqn{\hat{PSR}(SR^\ast)=Z[\hat{Z^\ast}]} is + a monotonic increasing function of \eqn{\hat{Z^\ast}}, it + suffices to compute the vector that maximizes + \eqn{\hat{Z^\ast}} This optimal vector is invariant of the value adopted by - the parameter $SR^\ast$. Gradient Ascent Logic is used to - compute the weights using the Function PsrPortfolio + the parameter \eqn{SR^\ast}. Gradient ascent logic is used + to compute the weights using the function PsrPortfolio } \examples{ data(edhec) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd 2013-08-18 12:39:18 UTC (rev 2817) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd 2013-08-18 13:19:58 UTC (rev 2818) @@ -17,10 +17,10 @@ under water for a particular confidence interval is given by - For a particular sequence $\left\{\pi_t\right\}$, the - time under water $(TuW)$ is the minimum number of - observations, $t>0$, such that $\pi_{t-1}<0$ and - $\pi_t>0$. + For a particular sequence \eqn{\left\{\pi_t\right\}}, the + time under water \eqn{(TuW)} is the minimum number of + observations, \eqn{t>0}, such that \eqn{\pi_{t-1}<0} and + \eqn{\pi_t>0}. For a normal distribution Maximum Time Under Water is given by the following expression.
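The probabilistic Sharpe ratio formula documented in the Rd files above can likewise be evaluated directly. A minimal base R sketch (names are illustrative; Z[.] is taken to be the standard normal cdf, so pnorm is used):

PSR <- function(sr, sr_star, n, skew, kurt){
  num <- (sr - sr_star) * sqrt(n - 1)
  den <- sqrt(1 - skew*sr_star + ((kurt - 1)/4)*sr^2)
  pnorm(num/den)
}
# e.g. 120 observations of a strategy with observed SR 0.08 against SR* = 0.05
PSR(sr=0.08, sr_star=0.05, n=120, skew=-0.5, kurt=4)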
From noreply at r-forge.r-project.org Sun Aug 18 16:02:19 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 18 Aug 2013 16:02:19 +0200 (CEST) Subject: [Returnanalytics-commits] r2819 - pkg/PerformanceAnalytics/sandbox/pulkit/vignettes Message-ID: <20130818140219.30C1218106D@r-forge.r-project.org> Author: pulkit Date: 2013-08-18 16:02:14 +0200 (Sun, 18 Aug 2013) New Revision: 2819 Added: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.pdf Modified: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw Log: PSR vignette Modified: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw 2013-08-18 13:19:58 UTC (rev 2818) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw 2013-08-18 14:02:14 UTC (rev 2819) @@ -4,6 +4,7 @@ \IfFileExists{url.sty}{\usepackage{url}} {\newcommand{\url}{\texttt}} +\usepackage[utf8]{inputenc} \usepackage{babel} \usepackage{Rd} @@ -18,7 +19,6 @@ \SweaveOpts{concordance=TRUE} \title{ Probabilistic Sharpe Ratio Optimization } -\author{Guide : Prof. Parama Barai} % \keywords{Probabilistic Sharpe Ratio,Minimum Track Record Length,risk,benchmark,portfolio} @@ -48,19 +48,19 @@ <>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/week1/code/ProbSharpeRatio.R") +source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R") @ <>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/week1/code/MinTRL.R") +source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R") @ <>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/week1/code/PSRopt.R") +source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R") @ <>= -dyn.load("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/week1/code/moment.so") +dyn.load("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/src/moment.so") @ \section{Probabilistic Sharpe Ratio} @@ -70,7 +70,7 @@ Here $n$ is the track record length or the number of data points. It can be daily,weekly or yearly depending on the input given - $\hat{\gamma{_3}}$ and $\hat{\gamma{_4}}$ are the skewness and kurtosis respectively. + \eqn{\hat{\gamma{_3}}} and \eqn{\hat{\gamma{_4}}} are the skewness and kurtosis respectively. It is not unusual to find strategies with irregular trading frequencies, such as weekly strategies that may not trade for a month. This poses a problem when computing an annualized Sharpe ratio, and there is no consensus as how skill should be measured in the context of irregular bets. Because PSR measures skill in probabilistic terms, it is invariant to calendar conventions. All calculations are done in the original frequency of the data, and there is no annualization. The Reference Sharpe Ratio is also given in the non-annualized form and should be greater than the Observed Sharpe Ratio. @@ -82,12 +82,12 @@ \section{Minimum Track Record Length} If a track record is shorter than Minimum Track Record Length(MinTRL), we do -not have enough confidence that the observed $\hat{SR}$ is above the designated threshold -$SR^\ast$. Minimum Track Record Length is given by the following expression. +not have enough confidence that the observed \eqn{\hat{SR}} is above the designated threshold +\eqn{SR^\ast}. 
Minimum Track Record Length is given by the following expression. \deqn{MinTRL = n^\ast = 1+\biggl[1-\hat{\gamma_3}\hat{SR}+\frac{\hat{\gamma_4}}{4}\hat{SR^2}\biggr]\biggl(\frac{Z_\alpha}{\hat{SR}-SR^\ast}\biggr)^2} -$\gamma{_3}$ and $\gamma{_4}$ are the skewness and kurtosis respectively. It is important to note that MinTRL is expressed in terms of number of observations, not annual or calendar terms. All the values used in the above formula are non-annualized, in the same frequency as that of the returns. +\eqn{\gamma{_3}} and \eqn{\gamma{_4}} are the skewness and kurtosis respectively. It is important to note that MinTRL is expressed in terms of number of observations, not annual or calendar terms. All the values used in the above formula are non-annualized, in the same frequency as that of the returns. <<>>= data(edhec) @@ -100,11 +100,11 @@ \deqn{\hat{PSR}(SR^\ast) = Z\biggl[\frac{(\hat{SR}-SR^\ast)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\ast + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\biggr]} -where $\sigma = \sqrt{E[(r-\mu)^2]}$ ,its standard deviation.$\gamma_3=\frac{E\biggl[(r-\mu)^3\biggr]}{\sigma^3}$ its skewness,$\gamma_4=\frac{E\biggl[(r-\mu)^4\biggr]}{\sigma^4}$ its kurtosis and $SR = \frac{\mu}{\sigma}$ its Sharpe Ratio. +where \eqn{\sigma = \sqrt{E[(r-\mu)^2]}} ,its standard deviation.\eqn{\gamma_3=\frac{E\biggl[(r-\mu)^3\biggr]}{\sigma^3}} its skewness,\eqn{\gamma_4=\frac{E\biggl[(r-\mu)^4\biggr]}{\sigma^4}} its kurtosis and \eqn{SR = \frac{\mu}{\sigma}} its Sharpe Ratio. -Because $\hat{PSR}(SR^\ast)=Z[\hat{Z^\ast}]$ is a monotonic increasing function of -$\hat{Z^\ast}$ ,it suffices to compute the vector that maximizes $\hat{Z^\ast}$ - This optimal vector is invariant of the value adopted by the parameter $SR^\ast$. +Because \eqn{\hat{PSR}(SR^\ast)=Z[\hat{Z^\ast}]} is a monotonic increasing function of +\eqn{\hat{Z^\ast}} ,it suffices to compute the vector that maximizes \eqn{\hat{Z^\ast}} + This optimal vector is invariant of the value adopted by the parameter \eqn{SR^\ast}. <<>>= @@ -112,10 +112,5 @@ PsrPortfolio(edhec) @ -\section{Future Plans} - -To compare the existing portfolio optimization techniques with the present one for various data. Incorporate changes in the -Probabilistic Sharpe Ratio to improve the performance of the Portfolio. Plot the Sharpe Ratio Efficient Frontier and Capital Allocation curves to get a better understanding of the sharpe ratio obtained for different portfolios. 
- \end{document} Added: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.pdf =================================================================== (Binary files differ) Property changes on: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.pdf ___________________________________________________________________ Added: svn:mime-type + application/octet-stream From noreply at r-forge.r-project.org Sun Aug 18 21:54:44 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 18 Aug 2013 21:54:44 +0200 (CEST) Subject: [Returnanalytics-commits] r2820 - pkg/PortfolioAnalytics/R Message-ID: <20130818195444.540B4184471@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-18 21:54:43 +0200 (Sun, 18 Aug 2013) New Revision: 2820 Modified: pkg/PortfolioAnalytics/R/constrained_objective.R pkg/PortfolioAnalytics/R/extractstats.R pkg/PortfolioAnalytics/R/generics.R pkg/PortfolioAnalytics/R/optimize.portfolio.R Log: - add objective_measures as a slot returned by optimize.portfolio for optimize_method=ROI \n - add the returns object as a slot returned by optimize.portfolio \n - add ETL and mETL as aliases for the ES function in constrained_objective \n - fixing print method for optimize.portfolio objects Modified: pkg/PortfolioAnalytics/R/constrained_objective.R =================================================================== --- pkg/PortfolioAnalytics/R/constrained_objective.R 2013-08-18 14:02:14 UTC (rev 2819) +++ pkg/PortfolioAnalytics/R/constrained_objective.R 2013-08-18 19:54:43 UTC (rev 2820) @@ -542,6 +542,8 @@ mES =, CVaR =, cVaR =, + ETL=, + mETL=, ES = { fun = match.fun(ES) if(!inherits(objective,"risk_budget_objective") & is.null(objective$arguments$portfolio_method)& is.null(nargs$portfolio_method)) nargs$portfolio_method='single' Modified: pkg/PortfolioAnalytics/R/extractstats.R =================================================================== --- pkg/PortfolioAnalytics/R/extractstats.R 2013-08-18 14:02:14 UTC (rev 2819) +++ pkg/PortfolioAnalytics/R/extractstats.R 2013-08-18 19:54:43 UTC (rev 2820) @@ -346,13 +346,8 @@ #' @export extractObjectiveMeasures <- function(object){ if(!inherits(object, "optimize.portfolio")) stop("object must be of class 'optimize.portfolio'") - if(inherits(object, "optimize.portfolio.ROI")){ - # objective measures returned as $out for ROI solvers - out <- object$out - } else { - # objective measures returned as $objective_measures for all other solvers - out <- object$objective_measures - } + # objective measures returned as $objective_measures for all other solvers + out <- object$objective_measures return(out) } Modified: pkg/PortfolioAnalytics/R/generics.R =================================================================== --- pkg/PortfolioAnalytics/R/generics.R 2013-08-18 14:02:14 UTC (rev 2819) +++ pkg/PortfolioAnalytics/R/generics.R 2013-08-18 19:54:43 UTC (rev 2820) @@ -249,8 +249,14 @@ cat("\n") # get objective measure + objective_measures <- object$objective_measures + tmp_obj <- as.numeric(unlist(objective_measures)) + names(tmp_obj) <- names(objective_measures) cat("Objective Measure:\n") - print(as.numeric(object$out), digits=digits) + for(i in 1:length(objective_measures)){ + print(tmp_obj[i], digits=4) + cat("\n") + } cat("\n") } Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-18 14:02:14 UTC (rev 2819) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 
2013-08-18 19:54:43 UTC (rev 2820) @@ -696,31 +696,36 @@ if("var" %in% names(moments)){ # Minimize variance if the only objective specified is variance # Maximize Quadratic Utility if var and mean are specified as objectives - out <- gmv_opt(R=R, constraints=constraints, moments=moments, lambda=lambda, target=target) - out$call <- call + roi_result <- gmv_opt(R=R, constraints=constraints, moments=moments, lambda=lambda, target=target) + weights <- roi_result$weights + out <- list(weights=weights, objective_measures=suppressWarnings(constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures), out=roi_result$out, call=call) } if(length(names(moments)) == 1 & "mean" %in% names(moments)) { # Maximize return if the only objective specified is mean if(!is.null(constraints$max_pos)) { # This is an MILP problem if max_pos is specified as a constraint - out <- maxret_milp_opt(R=R, constraints=constraints, moments=moments, target=target) - out$call <- call + roi_result <- maxret_milp_opt(R=R, constraints=constraints, moments=moments, target=target) + weights <- roi_result$weights + out <- list(weights=weights, objective_measures=constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures, out=roi_result$out, call=call) } else { # Maximize return LP problem - out <- maxret_opt(R=R, constraints=constraints, moments=moments, target=target) - out$call <- call + roi_result <- maxret_opt(R=R, constraints=constraints, moments=moments, target=target) + weights <- roi_result$weights + out <- list(weights=weights, objective_measures=constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures, out=roi_result$out, call=call) } } if( any(c("CVaR", "ES", "ETL") %in% names(moments)) ) { # Minimize sample ETL/ES/CVaR if CVaR, ETL, or ES is specified as an objective if(!is.null(constraints$max_pos)) { # This is an MILP problem if max_pos is specified as a constraint - out <- etl_milp_opt(R=R, constraints=constraints, moments=moments, target=target, alpha=alpha) - out$call <- call + roi_result <- etl_milp_opt(R=R, constraints=constraints, moments=moments, target=target, alpha=alpha) + weights <- roi_result$weights + out <- list(weights=weights, objective_measures=constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures, out=roi_result$out, call=call) } else { # Minimize sample ETL/ES/CVaR LP Problem - out <- etl_opt(R=R, constraints=constraints, moments=moments, target=target, alpha=alpha) - out$call <- call + roi_result <- etl_opt(R=R, constraints=constraints, moments=moments, target=target, alpha=alpha) + weights <- roi_result$weights + out <- list(weights=weights, objective_measures=constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures, out=roi_result$out, call=call) } } } ## end case for ROI @@ -821,6 +826,7 @@ # print(c("elapsed time:",round(end_t-start_t,2),":diff:",round(diff,2), ":stats: ", round(out$stats,4), ":targets:",out$targets)) if(message) message(c("elapsed time:", end_t-start_t)) out$portfolio <- portfolio + out$R <- R out$data_summary <- list(first=first(R), last=last(R)) out$elapsed_time <- end_t - start_t out$end_t <- as.character(Sys.time()) From noreply at r-forge.r-project.org Mon Aug 19 06:50:33 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 19 Aug 2013 06:50:33 +0200 (CEST) Subject: [Returnanalytics-commits] r2821 - in pkg/PortfolioAnalytics: R demo 
Message-ID: <20130819045033.3F89118524B@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-19 06:50:32 +0200 (Mon, 19 Aug 2013) New Revision: 2821 Modified: pkg/PortfolioAnalytics/R/objective.R pkg/PortfolioAnalytics/R/optimize.portfolio.R pkg/PortfolioAnalytics/demo/testing_ROI.R Log: modifying objectives and optimize.portfolio for backwards compatibility with v1_constraint object. No longer need to use _v1 with functions Modified: pkg/PortfolioAnalytics/R/objective.R =================================================================== --- pkg/PortfolioAnalytics/R/objective.R 2013-08-18 19:54:43 UTC (rev 2820) +++ pkg/PortfolioAnalytics/R/objective.R 2013-08-19 04:50:32 UTC (rev 2821) @@ -121,7 +121,11 @@ #' @rdname add.objective #' @name add.objective #' @export -add.objective_v2 <- function(portfolio, type, name, arguments=NULL, enabled=TRUE, ..., indexnum=NULL){ +add.objective_v2 <- function(portfolio, constraints=NULL, type, name, arguments=NULL, enabled=TRUE, ..., indexnum=NULL){ + if(!is.null(constraints) & inherits(constraints, "v1_constraint")){ + return(add.objective_v1(constraints=constraints, type=type, name=name, arguments=arguments, enabled=enabled, ...=..., indexnum=indexnum)) + } + # This function is based on the original add.objective function, but modified # to add objectives to a portfolio object instead of a constraint object. if (!is.portfolio(portfolio)) {stop("portfolio passed in is not of class portfolio")} Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-18 19:54:43 UTC (rev 2820) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-19 04:50:32 UTC (rev 2821) @@ -430,7 +430,7 @@ #' @export optimize.portfolio_v2 <- function( R, - portfolio, + portfolio=NULL, constraints=NULL, objectives=NULL, optimize_method=c("DEoptim","random","ROI","ROI_old","pso","GenSA"), @@ -448,22 +448,20 @@ #store the call for later call <- match.call() - if (is.null(portfolio) | !is.portfolio(portfolio)){ - stop("you must pass in an object of class portfolio to control the optimization") + if (!is.null(portfolio) & !is.portfolio(portfolio)){ + stop("you must pass in an object of class 'portfolio' to control the optimization") } - R <- checkData(R) - N <- length(portfolio$assets) - if (ncol(R) > N) { - R <- R[,names(portfolio$assets)] - } - T <- nrow(R) - # Check for constraints and objectives passed in separately outside of the portfolio object if(!is.null(constraints)){ if(inherits(constraints, "v1_constraint")){ - warning("constraint object passed in is a 'v1_constraint' object, updating to v2 specification") - portfolio <- update_constraint_v1tov2(portfolio=portfolio, v1_constraint=constraints) + if(is.null(portfolio)){ + # If the user has not passed in a portfolio, we will create one for them + tmp_portf <- portfolio.spec(assets=constraints$assets) + } + message("constraint object passed in is a 'v1_constraint' object, updating to v2 specification") + portfolio <- update_constraint_v1tov2(portfolio=tmp_portf, v1_constraint=constraints) + # print.default(portfolio) } if(!inherits(constraints, "v1_constraint")){ # Insert the constraints into the portfolio object @@ -475,6 +473,13 @@ portfolio <- insert_objectives(portfolio=portfolio, objectives=objectives) } + R <- checkData(R) + N <- length(portfolio$assets) + if (ncol(R) > N) { + R <- R[,names(portfolio$assets)] + } + T <- nrow(R) + out <- list() weights <- NULL Modified: 
pkg/PortfolioAnalytics/demo/testing_ROI.R =================================================================== --- pkg/PortfolioAnalytics/demo/testing_ROI.R 2013-08-18 19:54:43 UTC (rev 2820) +++ pkg/PortfolioAnalytics/demo/testing_ROI.R 2013-08-19 04:50:32 UTC (rev 2821) @@ -12,6 +12,10 @@ library(Ecdat) library(PortfolioAnalytics) +var.portfolio <- function(R, weights){ + weights <- matrix(weights, ncol=1) + return(as.numeric(t(weights) %*% var(R) %*% weights)) +} # General Parameters for sample code data(edhec) @@ -20,9 +24,9 @@ N <- length(funds) gen.constr <- constraint(assets = colnames(edhec), min=-Inf, max =Inf, min_sum=1, max_sum=1, risk_aversion=1) -gen.constr <- add.objective_v1(constraints=gen.constr, type="return", name="mean", enabled=FALSE, multiplier=0, target=mu.port) -gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="var", enabled=FALSE, multiplier=0, risk_aversion=10) -gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="CVaR", enabled=FALSE, multiplier=0) +gen.constr <- add.objective(constraints=gen.constr, type="return", name="mean", enabled=FALSE, multiplier=0, target=mu.port) +gen.constr <- add.objective(constraints=gen.constr, type="risk", name="var", enabled=FALSE, multiplier=0, risk_aversion=10) +gen.constr <- add.objective(constraints=gen.constr, type="risk", name="CVaR", enabled=FALSE, multiplier=0) # ===================== @@ -33,7 +37,7 @@ max.port$max <- rep(0.30,N) max.port$objectives[[1]]$enabled <- TRUE max.port$objectives[[1]]$target <- NULL -max.solution <- optimize.portfolio_v1(edhec, max.port, "ROI") +max.solution <- optimize.portfolio(R=edhec, constraints=max.port, optimize_method="ROI") # ===================== @@ -42,7 +46,7 @@ gmv.port <- gen.constr gmv.port$objectives[[2]]$enabled <- TRUE gmv.port$objectives[[2]]$risk_aversion <- 1 -gmv.solution <- optimize.portfolio_v1(edhec, gmv.port, "ROI") +gmv.solution <- optimize.portfolio(R=edhec, constraints=gmv.port, optimize_method="ROI") # ======================== @@ -51,9 +55,9 @@ target.port <- gen.constr target.port$objectives[[1]]$enabled <- TRUE target.port$objectives[[2]]$enabled <- TRUE -target.solution <- optimize.portfolio_v1(edhec, target.port, "ROI") +target.solution <- optimize.portfolio(R=edhec, constraints=target.port, optimize_method="ROI") - +target.solution$weights %*% var(edhec) %*% target.solution$weights # ======================== # Mean-variance: Maximize quadratic utility, dollar-neutral, target portfolio return # @@ -62,7 +66,7 @@ dollar.neu.port$max_sum <- 0 dollar.neu.port$objectives[[1]]$enabled <- TRUE dollar.neu.port$objectives[[2]]$enabled <- TRUE -dollar.neu.solution <- optimize.portfolio_v1(edhec, dollar.neu.port, "ROI") +dollar.neu.solution <- optimize.portfolio(R=edhec, constraints=dollar.neu.port, optimize_method="ROI") # ======================== @@ -71,7 +75,7 @@ cvar.port <- gen.constr cvar.port$objectives[[1]]$enabled <- TRUE cvar.port$objectives[[3]]$enabled <- TRUE -cvar.solution <- optimize.portfolio_v1(edhec, cvar.port, "ROI") +cvar.solution <- optimize.portfolio(R=edhec, constraints=cvar.port, optimize_method="ROI") # ===================== @@ -84,7 +88,7 @@ groups.port$cUP <- rep(0.30,length(groups)) groups.port$objectives[[2]]$enabled <- TRUE groups.port$objectives[[2]]$risk_aversion <- 1 -groups.solution <- optimize.portfolio_v1(edhec, groups.port, "ROI") +groups.solution <- optimize.portfolio(R=edhec, constraints=groups.port, optimize_method="ROI") # ======================== @@ -97,5 +101,5 @@ group.cvar.port$cUP <- 
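# in the v1 constraint spec cUP holds one upper bound per group, so the
# rep() value that follows caps each group's total weight at 30% for this
# group-constrained CVaR run (a reading of the demo code, not a documented API)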
rep(0.30,length(groups)) group.cvar.port$objectives[[1]]$enabled <- TRUE group.cvar.port$objectives[[3]]$enabled <- TRUE -group.cvar.solution <- optimize.portfolio_v1(edhec, group.cvar.port, "ROI") +group.cvar.solution <- optimize.portfolio(R=edhec, constraints=group.cvar.port, optimize_method="ROI") From noreply at r-forge.r-project.org Mon Aug 19 07:04:51 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 19 Aug 2013 07:04:51 +0200 (CEST) Subject: [Returnanalytics-commits] r2822 - in pkg/PortfolioAnalytics: R demo Message-ID: <20130819050451.9379418524B@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-19 07:04:50 +0200 (Mon, 19 Aug 2013) New Revision: 2822 Modified: pkg/PortfolioAnalytics/R/constrained_objective.R pkg/PortfolioAnalytics/R/objectiveFUN.R pkg/PortfolioAnalytics/demo/testing_ROI.R Log: adding var.portfolio function to calculate portfolio variance via constrained_objective when 'var' is an objective Modified: pkg/PortfolioAnalytics/R/constrained_objective.R =================================================================== --- pkg/PortfolioAnalytics/R/constrained_objective.R 2013-08-19 04:50:32 UTC (rev 2821) +++ pkg/PortfolioAnalytics/R/constrained_objective.R 2013-08-19 05:04:50 UTC (rev 2822) @@ -532,6 +532,9 @@ StdDev = { fun = match.fun(StdDev) }, + var = { + fun = match.fun(var.portfolio) + }, mVaR =, VaR = { fun = match.fun(VaR) Modified: pkg/PortfolioAnalytics/R/objectiveFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/objectiveFUN.R 2013-08-19 04:50:32 UTC (rev 2821) +++ pkg/PortfolioAnalytics/R/objectiveFUN.R 2013-08-19 05:04:50 UTC (rev 2822) @@ -19,4 +19,20 @@ if(length(weights) != length(wts.init)) stop("weights and wts.init are not the same length") return(sum(abs(wts.init - weights)) / N) -} \ No newline at end of file +} + +#' Calculate portfolio variance +#' +#' This function is used to calculate the portfolio variance via a call to +#' constrained_objective when var is an object for mean variance or quadratic +#' utility optimization. 
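In other words, the helper being documented here just evaluates the quadratic form t(w) %*% cov(R) %*% w. A minimal standalone sketch of the same calculation, using the edhec data these demos already load (the equal-weight vector is purely illustrative):

library(PerformanceAnalytics)  # provides the edhec data set
data(edhec)
R <- edhec[, 1:4]
w <- rep(1/4, 4)                    # illustrative equal weights
as.numeric(t(w) %*% var(R) %*% w)   # portfolio variance, the same form var.portfolio wraps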
+#' +#' @param R xts object of asset returns +#' @param weights vector of asset weights +#' @return numeric value of the portfolio variance +#' @author Ross Bennett +#' @export +var.portfolio <- function(R, weights){ + weights <- matrix(weights, ncol=1) + return(as.numeric(t(weights) %*% var(R) %*% weights)) +} Modified: pkg/PortfolioAnalytics/demo/testing_ROI.R =================================================================== --- pkg/PortfolioAnalytics/demo/testing_ROI.R 2013-08-19 04:50:32 UTC (rev 2821) +++ pkg/PortfolioAnalytics/demo/testing_ROI.R 2013-08-19 05:04:50 UTC (rev 2822) @@ -12,11 +12,6 @@ library(Ecdat) library(PortfolioAnalytics) -var.portfolio <- function(R, weights){ - weights <- matrix(weights, ncol=1) - return(as.numeric(t(weights) %*% var(R) %*% weights)) -} - # General Parameters for sample code data(edhec) funds <- names(edhec) @@ -57,7 +52,7 @@ target.port$objectives[[2]]$enabled <- TRUE target.solution <- optimize.portfolio(R=edhec, constraints=target.port, optimize_method="ROI") -target.solution$weights %*% var(edhec) %*% target.solution$weights + # ======================== # Mean-variance: Maximize quadratic utility, dollar-neutral, target portfolio return # From noreply at r-forge.r-project.org Mon Aug 19 07:23:27 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 19 Aug 2013 07:23:27 +0200 (CEST) Subject: [Returnanalytics-commits] r2823 - pkg/PortfolioAnalytics/demo Message-ID: <20130819052327.7EFCA185156@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-19 07:23:27 +0200 (Mon, 19 Aug 2013) New Revision: 2823 Modified: pkg/PortfolioAnalytics/demo/constrained_optim.R pkg/PortfolioAnalytics/demo/sortino.R pkg/PortfolioAnalytics/demo/testing_GenSA.R pkg/PortfolioAnalytics/demo/testing_pso.R Log: modifying demo scripts that use the v1 constraints object so that they no longer use functions with a _v1 suffix Modified: pkg/PortfolioAnalytics/demo/constrained_optim.R =================================================================== --- pkg/PortfolioAnalytics/demo/constrained_optim.R 2013-08-19 05:04:50 UTC (rev 2822) +++ pkg/PortfolioAnalytics/demo/constrained_optim.R 2013-08-19 05:23:27 UTC (rev 2823) @@ -1,13 +1,16 @@ require("PerformanceAnalytics") require("PortfolioAnalytics") require("DEoptim") + +# Load the data data(edhec) -pspec <- portfolio.spec(assets=colnames(edhec[, 1:10])) -constraints=constraint(assets = colnames(edhec[, 1:10]), min = 0.01, max = 0.4, min_sum=1, max_sum=1, weight_seq = generatesequence()) + +#constraints +constraints <- constraint(assets = colnames(edhec[, 1:10]), min = 0.01, max = 0.4, min_sum=1, max_sum=1, weight_seq = generatesequence()) # note that if you wanted to do a random portfolio optimization, mun_sum of .99 and max_sum of 1.01 might be more appropriate -constraints<-add.objective_v1(constraints, type="return", name="mean", arguments=list(), enabled=TRUE) -constraints<-add.objective_v1(constraints, type="risk_budget", name="ES", arguments=list(), enabled=TRUE, p=.95, min_prisk=.05, max_prisk=.15) -#constraints +constraints <- add.objective(constraints=constraints, type="return", name="mean", arguments=list(), enabled=TRUE) +constraints <- add.objective(constraints=constraints, type="risk_budget", name="ES", arguments=list(), enabled=TRUE, p=.95, min_prisk=.05, max_prisk=.15) + #now set some additional bits # I should have set the multiplier for returns to negative constraints$objectives[[1]]$multiplier=-1 @@ -18,20 +21,21 @@ print("We'll use a search_size parameter of 1000 
for this demo, but realistic portfolios will likely require search_size parameters much larger, the default is 20000 which is almost always large enough for any realistic portfolio and constraints, but will take substantially longer to run.") # look for a solution using both DEoptim and random portfolios -opt_out<-optimize.portfolio_v1(R=edhec[,1:10], constraints=constraints, optimize_method="DEoptim", search_size=1000, trace=TRUE) +opt_out <- optimize.portfolio(R=edhec[,1:10], constraints=constraints, optimize_method="DEoptim", search_size=1000, trace=TRUE) + #we need a little more wiggle in min/max sum for random portfolios or it takes too long to converge -constraints$min_sum<-.99 -constraints$max_sum<-1.01 -opt_out_random<-optimize.portfolio_v1(R=edhec[,1:10], constraints=constraints, optimize_method="random", search_size=1000, trace=TRUE) +constraints$min_sum <- 0.99 +constraints$max_sum <- 1.01 +opt_out_random <- optimize.portfolio(R=edhec[,1:10], constraints=constraints, optimize_method="random", search_size=1000, trace=TRUE) # now lets try a portfolio that rebalances quarterly -opt_out_rebalancing<-optimize.portfolio.rebalancing_v1(R=edhec[,1:10], constraints, optimize_method="DEoptim", search_size=1000, trace=FALSE,rebalance_on='quarters') -rebalancing_weights<-matrix(nrow=length(opt_out_rebalancing),ncol=length(opt_out_rebalancing[[1]]]$weights)) -rownames(rebalancing_weights)<-names(opt_out_rebalancing) -colnames(rebalancing_weights)<-names(opt_out_rebalancing[[1]]$weights) -for(period in 1:length(opt_out_rebalancing)) rebalancing_weights[period,]<-opt_out_rebalancing[[period]]$weights -rebalancing_returns<-Return.rebalancing(R=edhec,weights=rebalancing_weights) +opt_out_rebalancing <- optimize.portfolio.rebalancing_v1(R=edhec[,1:10], constraints=constraints, optimize_method="DEoptim", search_size=1000, trace=FALSE,rebalance_on='quarters') +rebalancing_weights <- matrix(nrow=length(opt_out_rebalancing),ncol=length(opt_out_rebalancing[[1]]]$weights)) +rownames(rebalancing_weights) <- names(opt_out_rebalancing) +colnames(rebalancing_weights) <- names(opt_out_rebalancing[[1]]$weights) +for(period in 1:length(opt_out_rebalancing)) rebalancing_weights[period,] <- opt_out_rebalancing[[period]]$weights +rebalancing_returns <- Return.rebalancing(R=edhec,weights=rebalancing_weights) charts.PerformanceSummary(rebalancing_returns) # and now lets rebalance quarterly with 48 mo trailing -opt_out_trailing<-optimize.portfolio.rebalancing_v1(R=edhec[,1:10], constraints, optimize_method="DEoptim", search_size=1000, trace=FALSE,rebalance_on='quarters',trailing_periods=48,training_period=48) \ No newline at end of file +opt_out_trailing<-optimize.portfolio.rebalancing(R=edhec[,1:10], constraints=constraints, optimize_method="DEoptim", search_size=1000, trace=FALSE,rebalance_on='quarters',trailing_periods=48,training_period=48) Modified: pkg/PortfolioAnalytics/demo/sortino.R =================================================================== --- pkg/PortfolioAnalytics/demo/sortino.R 2013-08-19 05:04:50 UTC (rev 2822) +++ pkg/PortfolioAnalytics/demo/sortino.R 2013-08-19 05:23:27 UTC (rev 2823) @@ -32,11 +32,11 @@ #'# Example 1 maximize Sortino Ratio SortinoConstr <- constraint(assets = colnames(indexes[,1:4]), min = 0.05, max = 1, min_sum=.99, max_sum=1.01, weight_seq = generatesequence(by=.001)) -SortinoConstr <- add.objective_v1(SortinoConstr, type="return", name="SortinoRatio", enabled=TRUE, arguments = list(MAR=MAR)) -SortinoConstr <- add.objective_v1(SortinoConstr, type="return", name="mean", 
enabled=TRUE, multiplier=0) # multiplier 0 makes it availble for plotting, but not affect optimization +SortinoConstr <- add.objective(constraints=SortinoConstr, type="return", name="SortinoRatio", enabled=TRUE, arguments = list(MAR=MAR)) +SortinoConstr <- add.objective(constraints=SortinoConstr, type="return", name="mean", enabled=TRUE, multiplier=0) # multiplier 0 makes it availble for plotting, but not affect optimization ### Use random portfolio engine -SortinoResult<-optimize.portfolio_v1(R=indexes[,1:4], constraints=SortinoConstr, optimize_method='random', search_size=2000, trace=TRUE, verbose=TRUE) +SortinoResult<-optimize.portfolio(R=indexes[,1:4], constraints=SortinoConstr, optimize_method='random', search_size=2000, trace=TRUE, verbose=TRUE) plot(SortinoResult, risk.col='SortinoRatio') ### alternately, Use DEoptim engine @@ -44,7 +44,7 @@ #plot(SortinoResultDE, risk.col='SortinoRatio') ### now rebalance quarterly -SortinoRebalance <- optimize.portfolio.rebalancing_v1(R=indexes[,1:4], constraints=SortinoConstr, optimize_method="random", trace=TRUE, rebalance_on='quarters', trailing_periods=NULL, training_period=36, search_size=2000) +SortinoRebalance <- optimize.portfolio.rebalancing(R=indexes[,1:4], constraints=SortinoConstr, optimize_method="random", trace=TRUE, rebalance_on='quarters', trailing_periods=NULL, training_period=36, search_size=2000) ############################################################################### # R (http://r-project.org/) Numeric Methods for Optimization of Portfolios Modified: pkg/PortfolioAnalytics/demo/testing_GenSA.R =================================================================== --- pkg/PortfolioAnalytics/demo/testing_GenSA.R 2013-08-19 05:04:50 UTC (rev 2822) +++ pkg/PortfolioAnalytics/demo/testing_GenSA.R 2013-08-19 05:23:27 UTC (rev 2823) @@ -23,10 +23,10 @@ mu.port <- mean(colMeans(R)) gen.constr <- constraint(assets = funds, min=-2, max=2, min_sum=0.99, max_sum=1.01, risk_aversion=1) -gen.constr <- add.objective_v1(constraints=gen.constr, type="return", name="mean", enabled=FALSE, target=mu.port) -gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="var", enabled=FALSE, risk_aversion=10) -gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="CVaR", enabled=FALSE) -gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="sd", enabled=FALSE) +gen.constr <- add.objective(constraints=gen.constr, type="return", name="mean", enabled=FALSE, target=mu.port) +gen.constr <- add.objective(constraints=gen.constr, type="risk", name="var", enabled=FALSE, risk_aversion=10) +gen.constr <- add.objective(constraints=gen.constr, type="risk", name="CVaR", enabled=FALSE) +gen.constr <- add.objective(constraints=gen.constr, type="risk", name="sd", enabled=FALSE) # ===================== @@ -37,14 +37,14 @@ max.port$objectives[[1]]$enabled <- TRUE max.port$objectives[[1]]$target <- NULL max.port$objectives[[1]]$multiplier <- -1 -max.solution <- optimize.portfolio_v1(R, max.port, "GenSA", trace=TRUE) +max.solution <- optimize.portfolio(R=R, constraints=max.port, optimize_method="GenSA", trace=TRUE) # ===================== # Mean-variance: Fully invested, Global Minimum Variance Portfolio gmv.port <- gen.constr gmv.port$objectives[[4]]$enabled <- TRUE -gmv.solution <- optimize.portfolio_v1(R, gmv.port, "GenSA", trace=TRUE) +gmv.solution <- optimize.portfolio(R=R, constraints=gmv.port, optimize_method="GenSA", trace=TRUE) @@ -56,7 +56,7 @@ cvar.port$max <- rep(1,N) cvar.port$objectives[[3]]$enabled <- 
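# the list(p=0.95, clean="boudt") arguments assigned just below pass a 95%
# confidence level and Boudt-style return cleaning through to the CVaR
# calculation (an interpretation of the demo, not package documentation)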
TRUE cvar.port$objectives[[3]]$arguments <- list(p=0.95, clean="boudt") -cvar.solution <- optimize.portfolio_v1(R, cvar.port, "GenSA") +cvar.solution <- optimize.portfolio(R=R, constraints=cvar.port, optimize_method="GenSA") Modified: pkg/PortfolioAnalytics/demo/testing_pso.R =================================================================== --- pkg/PortfolioAnalytics/demo/testing_pso.R 2013-08-19 05:04:50 UTC (rev 2822) +++ pkg/PortfolioAnalytics/demo/testing_pso.R 2013-08-19 05:23:27 UTC (rev 2823) @@ -22,10 +22,10 @@ mu.port <- mean(colMeans(R)) gen.constr <- constraint(assets = funds, min=-2, max=2, min_sum=0.99, max_sum=1.01, risk_aversion=1) -gen.constr <- add.objective_v1(constraints=gen.constr, type="return", name="mean", enabled=FALSE, target=mu.port) -gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="var", enabled=FALSE, risk_aversion=10) -gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="CVaR", enabled=FALSE) -gen.constr <- add.objective_v1(constraints=gen.constr, type="risk", name="sd", enabled=FALSE) +gen.constr <- add.objective(constraints=gen.constr, type="return", name="mean", enabled=FALSE, target=mu.port) +gen.constr <- add.objective(constraints=gen.constr, type="risk", name="var", enabled=FALSE, risk_aversion=10) +gen.constr <- add.objective(constraints=gen.constr, type="risk", name="CVaR", enabled=FALSE) +gen.constr <- add.objective(constraints=gen.constr, type="risk", name="sd", enabled=FALSE) # ===================== @@ -36,14 +36,14 @@ max.port$objectives[[1]]$enabled <- TRUE max.port$objectives[[1]]$target <- NULL max.port$objectives[[1]]$multiplier <- -1 -max.solution <- optimize.portfolio_v1(R, max.port, "pso", trace=TRUE) +max.solution <- optimize.portfolio(R=R, constraints=max.port, optimize_method="pso", trace=TRUE) # ===================== # Mean-variance: Fully invested, Global Minimum Variance Portfolio gmv.port <- gen.constr gmv.port$objectives[[4]]$enabled <- TRUE -gmv.solution <- optimize.portfolio_v1(R, gmv.port, "pso", trace=TRUE) +gmv.solution <- optimize.portfolio(R=R, constraints=gmv.port, optimize_method="pso", trace=TRUE) @@ -55,7 +55,7 @@ cvar.port$max <- rep(1,N) cvar.port$objectives[[3]]$enabled <- TRUE cvar.port$objectives[[3]]$arguments <- list(p=0.95, clean="boudt") -cvar.solution <- optimize.portfolio_v1(R, cvar.port, "pso", trace=TRUE) +cvar.solution <- optimize.portfolio(R=R, constraints=cvar.port, optimize_method="pso", trace=TRUE) From noreply at r-forge.r-project.org Mon Aug 19 07:28:55 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 19 Aug 2013 07:28:55 +0200 (CEST) Subject: [Returnanalytics-commits] r2824 - pkg/PortfolioAnalytics/R Message-ID: <20130819052855.5C257185156@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-19 07:28:55 +0200 (Mon, 19 Aug 2013) New Revision: 2824 Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R Log: adding backwards compatibility to optimize.portfolio.rebalancingto accept a v1_constraint object Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-19 05:23:27 UTC (rev 2823) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-19 05:28:55 UTC (rev 2824) @@ -997,15 +997,30 @@ #' @author Kris Boudt, Peter Carl, Brian G. 
Peterson #' @name optimize.portfolio.rebalancing #' @export -optimize.portfolio.rebalancing <- function(R, portfolio, constraints=NULL, objectives=NULL, optimize_method=c("DEoptim","random","ROI"), search_size=20000, trace=FALSE, ..., rp=NULL, rebalance_on=NULL, training_period=NULL, trailing_periods=NULL) +optimize.portfolio.rebalancing <- function(R, portfolio=NULL, constraints=NULL, objectives=NULL, optimize_method=c("DEoptim","random","ROI"), search_size=20000, trace=FALSE, ..., rp=NULL, rebalance_on=NULL, training_period=NULL, trailing_periods=NULL) { stopifnot("package:foreach" %in% search() || require("foreach",quietly=TRUE)) start_t<-Sys.time() + if (!is.null(portfolio) & !is.portfolio(portfolio)){ + stop("you must pass in an object of class 'portfolio' to control the optimization") + } + # Check for constraints and objectives passed in separately outside of the portfolio object if(!is.null(constraints)){ - # Insert the constraints into the portfolio object - portfolio <- insert_constraints(portfolio=portfolio, constraints=constraints) + if(inherits(constraints, "v1_constraint")){ + if(is.null(portfolio)){ + # If the user has not passed in a portfolio, we will create one for them + tmp_portf <- portfolio.spec(assets=constraints$assets) + } + message("constraint object passed in is a 'v1_constraint' object, updating to v2 specification") + portfolio <- update_constraint_v1tov2(portfolio=tmp_portf, v1_constraint=constraints) + # print.default(portfolio) + } + if(!inherits(constraints, "v1_constraint")){ + # Insert the constraints into the portfolio object + portfolio <- insert_constraints(portfolio=portfolio, constraints=constraints) + } } if(!is.null(objectives)){ # Insert the objectives into the portfolio object From noreply at r-forge.r-project.org Mon Aug 19 11:18:25 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 19 Aug 2013 11:18:25 +0200 (CEST) Subject: [Returnanalytics-commits] r2825 - pkg/PerformanceAnalytics/sandbox/Shubhankit/R Message-ID: <20130819091825.7E968181059@r-forge.r-project.org> Author: shubhanm Date: 2013-08-19 11:18:25 +0200 (Mon, 19 Aug 2013) New Revision: 2825 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/inst/ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/man/ Log: From noreply at r-forge.r-project.org Mon Aug 19 15:08:10 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 19 Aug 2013 15:08:10 +0200 (CEST) Subject: [Returnanalytics-commits] r2826 - in pkg/PerformanceAnalytics/sandbox/pulkit: . 
R man vignettes Message-ID: <20130819130810.936C8184468@r-forge.r-project.org> Author: pulkit Date: 2013-08-19 15:08:10 +0200 (Mon, 19 Aug 2013) New Revision: 2826 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R pkg/PerformanceAnalytics/sandbox/pulkit/R/CdaR.R pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MonteSimulTriplePenance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/man/AlphaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw Log: check Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-19 13:08:10 UTC (rev 2826) @@ -9,7 +9,8 @@ xts, PerformanceAnalytics Suggests: - PortfolioAnalytics + PortfolioAnalytics, + lpSolve Maintainer: Brian G. Peterson Description: GSoC 2013 project to replicate literature on drawdowns and non-i.i.d assumptions in finance. Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-19 13:08:10 UTC (rev 2826) @@ -1,5 +1,6 @@ export(AlphaDrawdown) export(BenchmarkSR) +export(CDaR) export(CdarMultiPath) export(chart.BenchmarkSR) export(chart.Penance) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -29,7 +29,8 @@ #'"sharpe","correlation" or "strategies" #'@param ylim set the ylim value, as in \code{\link{plot}} #'@param xlim set the xlim value, as in \code{\link{plot}} - +#' +#'@author Pulkit Mehrotra #'@references #'Bailey, David H. 
and Lopez de Prado, Marcos, The Strategy Approval Decision: #'A Sharpe Ratio Indifference Curve Approach (January 2013). Algorithmic Finance, Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -15,6 +15,7 @@ #' #'@param R a vector, matrix, data frame,timeseries or zoo object of asset returns #' +#'@author Pulkit Mehrotra #'@references #'Bailey, David H. and Lopez de Prado, Marcos, The Strategy Approval Decision: #'A Sharpe Ratio Indifference Curve Approach (January 2013). Algorithmic Finance, @@ -66,4 +67,4 @@ # # $Id: BenchmarkSR.R $ # -############################################################################### \ No newline at end of file +############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -29,6 +29,7 @@ #'@param p confidence level for calculation ,default(p=0.95) #'@param \dots any other passthru parameters #' +#'@author Pulkit Mehrotra #'@references #'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM) #' with Drawdown Measure.Research Report 2012-9, ISE Dept., University of Florida, September 2012 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/CdaR.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/CdaR.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/CdaR.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -1,3 +1,36 @@ +#'@title +#' Calculate Uryasev's proposed Conditional Drawdown at Risk (CDD or CDaR) +#' measure +#' +#' @description +#' For some confidence level \eqn{p}, the conditional drawdown is the the mean +#' of the worst \eqn{p\%} drawdowns. +#' +#' @aliases CDD CDaR +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of +#' asset returns +#' @param weights portfolio weighting vector, default NULL, see Details +#' @param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, +#' default TRUE +#' @param invert TRUE/FALSE whether to invert the drawdown measure. see +#' Details. +#' @param p confidence level for calculation, default p=0.95 +#' @param \dots any other passthru parameters +#' @author Brian G. Peterson +#' @seealso \code{\link{ES}} \code{\link{maxDrawdown}} +#' @references Chekhlov, A., Uryasev, S., and M. Zabarankin. Portfolio +#' Optimization With Drawdown Constraints. B. Scherer (Ed.) Asset and Liability +#' Management Tools, Risk Books, London, 2003 +#' http://www.ise.ufl.edu/uryasev/drawdown.pdf +#' @keywords ts multivariate distribution models +#' @examples +#' library(lpSolve) +#' data(edhec) +#' t(round(CDaR(edhec),4)) +#' +#' @export + + CDaR<-function (R, weights = NULL, geometric = TRUE, invert = TRUE, p = 0.95, ...) 
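# at the default p = 0.95, CDaR averages the worst tail of the drawdown
# distribution; the example from the roxygen block above exercises it
# column by column:
#   library(lpSolve); data(edhec)
#   t(round(CDaR(edhec), 4))   # one conditional drawdown figure per column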
{ #p = .setalphaprob(p) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -34,6 +34,7 @@ #' alpha = 0 and if "max" then alpha = 1 is taken. #'@param \dots any passthru variable. #' +#'@author Pulkit Mehrotra #'@references #'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model #'(CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -29,6 +29,7 @@ #' alpha = 0 and if "max" then alpha = 1 is taken. #'@param \dots any passthru variable. #' +#'@author Pulkit Mehrotra #'@references #'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model #'(CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -25,16 +25,19 @@ #' @param type The type of BetaDrawdown if specified alpha then the alpha value given is taken (default 0.95). If "average" then alpha = 0 and if "max" then alpha = 1 is taken. #'@param \dots any passthru variable #' +#'@author Pulkit Mehrotra #'@references #'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model #'(CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University #'of Florida,September 2012. #'@examples -#'AlphaDrawdown(edhec[,1],edhec[,2]) ## expected value : 0.5141929 +#'data(edhec) +#'AlphaDrawdown(edhec[,1],edhec[,2]) #' -#'AlphaDrawdown(edhec[,1],edhec[,2],type="max") ## expected value : 0.8983177 +#'AlphaDrawdown(edhec[,1],edhec[,2],type="max") # expected value : 0.8983177 #' -#'AlphaDrawdown(edhec[,1],edhec[,2],type="average") ## expected value : 1.692592 +#'AlphaDrawdown(edhec[,1],edhec[,2],type="average") # expected value : 1.692592 +#' #'@export Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -18,6 +18,7 @@ #'@param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining(FALSE) #'to aggregate returns, default is TRUE #'@param \dots any other variable +#'@author Pulkit Mehrotra #'@references Yang, Z. 
George and Zhong, Liang, Optimal Portfolio Strategy to #'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) #'@examples Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -29,7 +29,8 @@ #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset return #' @param type The type of distribution "gpd","pd","weibull" #' @param threshold The threshold beyond which the drawdowns have to be modelled -#' +#' +#'@author Pulkit Mehrotra #'@references #'Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003). Coppead Working Paper Series No. 359. #'Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322. Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -18,6 +18,7 @@ #'@param b final point #'@param minimum TRUE to calculate the minimum and FALSE to calculate the Maximum #'@param function_name The name of the function +#'@author Pulkit Mehrotra #' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). #' #'@export Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -39,7 +39,7 @@ #' @param R Returns #' @param confidence the confidence interval #' @param type The type of distribution "normal" or "ar"."ar" stands for Autoregressive. -#' +#' @author Pulkit Mehrotra #' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). #' #' @examples Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -38,6 +38,7 @@ #'@param kr Kurtosis, in the same periodicity as the returns(non-annualized). #'To be given in case the return series is not given. #' +#'@author Pulkit Mehrotra #'@references Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio #'Efficient Frontier} (July 1, 2012). Journal of Risk, Vol. 15, No. 
2, Winter #' 2012/13 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/MonteSimulTriplePenance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/MonteSimulTriplePenance.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/MonteSimulTriplePenance.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -19,7 +19,7 @@ #' @param dp0 Bet at origin (initialization of AR(1)) #' @param bets Number of bets in the cumulative process #' @param confidence Confidence level for quantile -#' +#' @author Pulkit Mehrotra #' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs #' and the ?Triple Penance? Rule(January 1, 2013). #' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -22,6 +22,7 @@ #'@param MaxIter The Maximum number of iterations #'@param delta The value of delta Z #' +#'@author Pulkit Mehrotra #'@references Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio #'Efficient Frontier} (July 1, 2012). Journal of Risk, Vol. 15, No. 2, Winter #'2012/13 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -30,7 +30,7 @@ #' @param kr Kurtosis, in the same periodicity as the returns(non-annualized). #' To be given in case the return series is not given. #' @param n track record length. To be given in case the return series is not given. -#' +#'@author Pulkit Mehrotra #' @references Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio #' Efficient Frontier} (July 1, 2012). Journal of Risk, Vol. 15, No. 2, Winter #' 2012/13 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -29,6 +29,7 @@ #'@param geomtric geometric utilize geometric chaining (TRUE) or simple/arithmetic #'chaining(FALSE) to aggregate returns, default is TRUE. #'@param ... any other variable #' +#'@author Pulkit Mehrotra #'@references Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to #'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) #' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -20,6 +20,7 @@ #'@param h Look back period #'@param geomtric geometric utilize geometric chaining (TRUE) or simple/arithmetic #'chaining(FALSE) to aggregate returns, default is TRUE. #'@param ... 
any other variable +#'@author Pulkit Mehrotra #'@examples #'rollEconomicMax(edhec,0.08,100) #'@export Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -33,6 +33,7 @@ #'@param ylim set the ylim value, as in \code{\link{plot}} #'@param xlim set the xlim value, as in \code{\link{plot}} #' +#'@author Pulkit Mehrotra #'@references #'Bailey, David H. and Lopez de Prado, Marcos, The Strategy Approval Decision: #'A Sharpe Ratio Indifference Curve Approach (January 2013). Algorithmic Finance, @@ -114,4 +115,4 @@ # # $Id: chart.SRIndifferenceCurve.R $ # -############################################################################### \ No newline at end of file +############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -26,7 +26,7 @@ #' @param R return series #' @param confidence the confidence interval #' @param type The type of distribution "normal" or "ar"."ar" stands for Autoregressive. -#' +#' @author Pulkit Mehrotra #' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). #' #' @examples Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -28,6 +28,7 @@ #'@param ylim set the ylim value, as in \code{\link{plot}} #'@param xlim set the xlim value, as in \code{\link{plot}} #' +#'@author Pulkit Mehrotra #'@seealso \code{\link{plot}} #'@keywords ts multivariate distribution models hplot #'@examples Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -12,6 +12,7 @@ #'@param legend.loc set the legend.loc, as in \code{\link{plot}} #'@param colorset set the colorset label, as in \code{\link{plot}} #'@param \dots any other variable +#'@author Pulkit Mehrotra #'@references Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to #'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) #'@examples Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -19,6 +19,7 @@ #'@param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining(FALSE) #'to aggregate returns, default is TRUE #'@param \dots any other variable +#'@author Pulkit Mehrotra #'@references Yang, Z. 
George and Zhong, Liang, Optimal Portfolio Strategy to #'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) #'@examples Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -2,8 +2,9 @@ #' #'@description #'A table to display the Probabilistic Sharpe Ratio Along with -#'the Minimum Track Record Length for better assessment of the returns. -#' +#'the Minimum Track Record Length for better assessment of the returns.For more +#'details about Probabilistic Sharpe Ratio and Minimum Track record length see\ +#'\code{ProbSharpeRatio} and \code{MinTrackRecord} respectively. #'@aliases table.PSR #' #'@param R the return series @@ -12,6 +13,7 @@ #'@param the confidence level #'@param weights the weights for the portfolio #' +#'@author Pulkit Mehrotra #'@references Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio #'Efficient Frontier} (July 1, 2012). Journal of Risk, Vol. 15, No. 2, Winter #' 2012/13 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R 2013-08-19 13:08:10 UTC (rev 2826) @@ -2,11 +2,14 @@ #' Table for displaying the Mximum Drawdown and the Time under Water #' #' @description -#' \code{table.Penance} Displays the table showing mean,Standard Deviation , phi, sigma , MaxDD,time at which MaxDD occurs, MaxTuW and the penance. +#' \code{table.Penance} Displays the table showing mean,Standard Deviation , phi, sigma , MaxDD,time at which MaxDD occurs, MaxTuW and the penance.For more +#' details about MaxDD , Time under Water see code \code{MaxDD} and \code{TuW} +#' respoectively. #' #' @param R Returns #' @param confidence the confidence interval #' +#' @author Pulkit Mehrotra #' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). #' @export Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/AlphaDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/AlphaDrawdown.Rd 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/AlphaDrawdown.Rd 2013-08-19 13:08:10 UTC (rev 2826) @@ -48,11 +48,12 @@ their CAPM predictions } \examples{ -AlphaDrawdown(edhec[,1],edhec[,2]) ## expected value : 0.5141929 +data(edhec) +AlphaDrawdown(edhec[,1],edhec[,2]) -AlphaDrawdown(edhec[,1],edhec[,2],type="max") ## expected value : 0.8983177 +AlphaDrawdown(edhec[,1],edhec[,2],type="max") # expected value : 0.8983177 -AlphaDrawdown(edhec[,1],edhec[,2],type="average") ## expected value : 1.692592 +AlphaDrawdown(edhec[,1],edhec[,2],type="average") # expected value : 1.692592 } \references{ Zabarankin, M., Pavlikov, K., and S. Uryasev. 
Capital Modified: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw 2013-08-19 13:08:10 UTC (rev 2826) @@ -42,25 +42,25 @@ <>= -source("redd.R") +source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/redd.R") @ <>= -source("edd.R") +source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/Edd.R") @ <>= -source("REM.R") +source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/REM.R") @ <>= -source("REDDCOPS.R") +source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R") @ <>= -source("EDDCOPS.R") +source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R") @ \section{ Rolling Economic Max } Rolling Economic Max at time t, looking back at portfolio Wealth history Modified: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw 2013-08-19 13:08:10 UTC (rev 2826) @@ -4,6 +4,7 @@ \IfFileExists{url.sty}{\usepackage{url}} {\newcommand{\url}{\texttt}} +\usepackage[utf8]{inputenc} \usepackage{babel} \usepackage{Rd} @@ -32,25 +33,24 @@ <>= library(PerformanceAnalytics) -library(ggplot2) data(edhec) @ <>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/week2/code/BenchmarkSR.R") +source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R") @ <>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/week2/code/SRIndifferenceCurve.R") +source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R") @ \section{Benchmark Sharpe Ratio} - The performance of an Equal Volatility Weights benchmark ($SR_B$) is fully characterized in terms of: + The performance of an Equal Volatility Weights benchmark (\eqn{SR_B}) is fully characterized in terms of: 1. Number of approved strategies (S). 2. Average SR among strategies (SR). -3. Average off-diagonal correlations among strategies($\bar{\rho}$)). +3. Average off-diagonal correlations among strategies\eqn{\bar{\rho}}. The benchmark SR is a linear function of the average SR of the individual strategies, and a decreasing convex function of the number of strategies and the average pairwise correlation. 
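The closed form quoted next is easy to sanity-check numerically; a minimal sketch with made-up inputs (all three values are illustrative, not taken from the vignette):

S <- 10          # number of approved strategies
SR_bar <- 0.5    # average Sharpe ratio across the strategies
rho_bar <- 0.2   # average off-diagonal pairwise correlation
SR_bar * sqrt(S / (1 + (S - 1) * rho_bar))   # benchmark Sharpe ratio SR_B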
@@ -59,7 +59,7 @@ \deqn{SR_B = \bar{SR}\sqrt{\frac{S}{1+(S-1)\bar{\rho}}}} <<>>= -BenchmanrkSR(edhec) +BenchmarkSR(edhec) @ \section{Sharpe Ratio Indifference Curve} @@ -76,7 +76,7 @@ \deqn{\bar{\rho}{_{s+1}}=\frac{1}{2}\biggl[\frac{\bar{({SR}.S+SR_{s+1}})^2}{S.SR_B^2}-\frac{S+1}{S}-\bar{\rho}{S-1}\biggr]} <>= -SRIndifference(edhec) +chart.SRIndifference(edhec) @ \end{document} \ No newline at end of file Modified: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw 2013-08-19 09:18:25 UTC (rev 2825) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw 2013-08-19 13:08:10 UTC (rev 2826) @@ -37,20 +37,20 @@ @ <>= -source("../code/MaxDD.R") +source("../R/MaxDD.R") @ <>= -source("../code/TriplePenance.R") +source("../R/TriplePenance.R") @ <>= -source("../code/GoldenSection.R") +source("../R/GoldenSection.R") @ <>= -source("../code/TuW.R") +source("../R/TuW.R") @ \section{ Maximum Drawdown } Maximum Drawdown tells us Up to how much could a particular strategy lose with a given confidence level ?. This function calculated Maximum Drawdown for two underlying processes normal and autoregressive. For a normal process Maximum Drawdown is given by the formula @@ -61,6 +61,7 @@ The time at which the Maximum Drawdown occurs is given by + \deqn{t^\ast=\biggl(\frac{Z_{\alpha}\sigma}{2\mu}\biggr)^2} Here $Z_{\alpha}$ is the critical value of the Standard Normal Distribution associated with a probability $\alpha$.$\sigma$ and $\mu$ are the Standard Distribution and the mean respectively. From noreply at r-forge.r-project.org Mon Aug 19 16:44:14 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 19 Aug 2013 16:44:14 +0200 (CEST) Subject: [Returnanalytics-commits] r2827 - in pkg/PerformanceAnalytics/sandbox/pulkit: . 
R man Message-ID: <20130819144414.89515185A85@r-forge.r-project.org> Author: pulkit Date: 2013-08-19 16:44:14 +0200 (Mon, 19 Aug 2013) New Revision: 2827 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R pkg/PerformanceAnalytics/sandbox/pulkit/R/CdaR.R pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/man/AlphaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/BenchmarkSR.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/CdarMultiPath.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MonteSimulTriplePenance.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.SRIndifference.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/golden_section.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/rollDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/table.PSR.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/table.Penance.Rd Log: added see also Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-19 14:44:14 UTC (rev 2827) @@ -7,7 +7,8 @@ Contributors: Peter Carl, Brian G. 
Peterson Depends: xts, - PerformanceAnalytics + PerformanceAnalytics, + lpSolve Suggests: PortfolioAnalytics, lpSolve Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -31,6 +31,7 @@ #'@param xlim set the xlim value, as in \code{\link{plot}} #' #'@author Pulkit Mehrotra +#'@seealso \code{\link{BenchmarkSR}} \code{\link{chart.SRIndifference}} \code{\link{plot}} #'@references #'Bailey, David H. and Lopez de Prado, Marcos, The Strategy Approval Decision: #'A Sharpe Ratio Indifference Curve Approach (January 2013). Algorithmic Finance, Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -30,6 +30,9 @@ #'@param \dots any other passthru parameters #' #'@author Pulkit Mehrotra +#' @seealso \code{\link{ES}} \code{\link{maxDrawdown}} \code{\link{CDaR}} +#'\code{\link{AlphaDrawdown}} \code{\link{MultiBetaDrawdown}} \code{\link{BetaDrawdown}} + #'@references #'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM) #' with Drawdown Measure.Research Report 2012-9, ISE Dept., University of Florida, September 2012 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/CdaR.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/CdaR.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/CdaR.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -17,7 +17,8 @@ #' @param p confidence level for calculation, default p=0.95 #' @param \dots any other passthru parameters #' @author Brian G. Peterson -#' @seealso \code{\link{ES}} \code{\link{maxDrawdown}} +#' @seealso \code{\link{ES}} \code{\link{maxDrawdown}} \code{\link{CdarMultiPath}} +#'\code{\link{AlphaDrawdown}} \code{\link{MultiBetaDrawdown}} \code{\link{BetaDrawdown}} #' @references Chekhlov, A., Uryasev, S., and M. Zabarankin. Portfolio #' Optimization With Drawdown Constraints. B. Scherer (Ed.) Asset and Liability #' Management Tools, Risk Books, London, 2003 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -4,20 +4,21 @@ #'@description #'The drawdown beta is formulated as follows #' -#'\deqn{\beta_DD = \frac{{\sum_{t=1}^T}{q_t^\asterisk}{(w_{k^{\asterisk}(t)}-w_t)}}{D_{\alpha}(w^M)}} +#'\deqn{\beta_DD = \frac{{\sum_{t=1}^T}{q_t^\**}{(w_{k^{\**}(t)}-w_t)}}{D_{\alpha}(w^M)}} #' here \eqn{\beta_DD} is the drawdown beta of the instrument. -#'\eqn{k^{\asterisk}(t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_k^M} +#'\eqn{k^{\**}(t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_k^M} #' -#'\eqn{q_t^\asterisk=1/((1-\alpha)T)} if \eqn{d_t^M} is one of the +#'\eqn{q_t^\**=1/((1-\alpha)T)} if \eqn{d_t^M} is one of the #'\eqn{(1-\alpha)T} largest drawdowns \eqn{d_1^{M} ,......d_t^M} of the -#'optimal portfolio and \eqn{q_t^\asterisk = 0} otherwise. 
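A rough numerical rendering of this definition may help; the sketch below is not the package's BetaDrawdown implementation: it compounds wealth paths for simplicity and treats edhec[,2] as a stand-in for the optimal portfolio.

library(PerformanceAnalytics)  # provides the edhec data set
data(edhec)
alpha <- 0.95
x <- as.numeric(edhec[, 1])                 # instrument returns
m <- as.numeric(edhec[, 2])                 # stand-in 'optimal' portfolio returns
wx <- cumprod(1 + x); wm <- cumprod(1 + m)  # compounded wealth paths
dd <- cummax(wm) - wm                       # drawdown series of the optimal path
worst <- order(dd, decreasing = TRUE)[seq_len(ceiling((1 - alpha) * length(dd)))]
peak <- sapply(worst, function(t) which.max(wm[1:t]))  # running-peak dates k*(t)
mean(wx[peak] - wx[worst]) / mean(dd[worst])           # crude drawdown beta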
It is assumed -#'that \eqn{D_\alpha(w^M) {\neq} 0} and that \eqn{q_t^\asterisk} and -#'\eqn{k^{\asterisk}(t) are uniquely determined for all \eqn{t = 1....T} +#'optimal portfolio and \eqn{q_t^\** = 0} otherwise. It is assumed +#'that \eqn{D_\alpha(w^M) {\neq} 0} and that \eqn{q_t^\**} and +#'\eqn{k^{\**}(t)} are uniquely determined for all \eqn{t = 1....T} #' #'The numerator in \eqn{\beta_DD} is the average rate of return of the #'instrument over time periods corresponding to the \eqn{(1-\alpha)T} largest -#'drawdowns of the optimal portfolio, where \eqn{w_t - w_k^{\asterisk}(t)} -#'is the cumulative rate of return of the instrument from the optimal portfolio#' peak time \eqn{k^\asterisk(t)} to time t. +#'drawdowns of the optimal portfolio, where \eqn{w_t - w_k^{\**}(t)} +#'is the cumulative rate of return of the instrument from the optimal portfolio +#' peak time \eqn{k^\**(t)} to time t. #' #'The difference in CDaR and standard betas can be explained by the #'conceptual difference in beta definitions: the standard beta accounts for @@ -25,24 +26,34 @@ #'when the market goes up, while CDaR betas focus only on market drawdowns #'and, thus, are not affected when the market performs well. #' -#'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns -#'@param Rm Return series of the optimal portfolio an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of +#'asset returns +#'@param Rm Return series of the optimal portfolio an xts, vector, matrix, +#'data frame, timeSeries or zoo object of asset returns #'@param p confidence level for calculation ,default(p=0.95) #'@param weights portfolio weighting vector, default NULL, see Details -#' @param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default TRUE -#' @param type The type of BetaDrawdown if specified alpha then the alpha value given is taken (default 0.95). If "average" then -#' alpha = 0 and if "max" then alpha = 1 is taken. +#'@param geometric utilize geometric chaining (TRUE) or simple/arithmetic +#'chaining (FALSE) to aggregate returns, default TRUE +#'@param type The type of BetaDrawdown if specified alpha then the alpha +#'value given is taken (default 0.95). If "average" then alpha = 0 and if +#'"max" then alpha = 1 is taken. #'@param \dots any passthru variable. #' #'@author Pulkit Mehrotra +#' @seealso \code{\link{ES}} \code{\link{maxDrawdown}} \code{\link{CdarMultiPath}} +#'\code{\link{AlphaDrawdown}} \code{\link{MultiBetaDrawdown}} \code{\link{CDaR}} + +#' #'@references +#' #'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model #'(CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University #'of Florida,September 2012. #' #'@examples #'BetaDrawdown(edhec[,1],edhec[,2]) - +#' +#' BetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ # DESCRIPTION: Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -30,6 +30,9 @@ #'@param \dots any passthru variable. 
#' #'@author Pulkit Mehrotra +#' @seealso \code{\link{ES}} \code{\link{maxDrawdown}} \code{\link{CdarMultiPath}} +#'\code{\link{AlphaDrawdown}} \code{\link{CDaR}} \code{\link{BetaDrawdown}} + #'@references #'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model #'(CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -26,6 +26,9 @@ #'@param \dots any passthru variable #' #'@author Pulkit Mehrotra +#' @seealso \code{\link{ES}} \code{\link{maxDrawdown}} \code{\link{CdarMultiPath}} +#'\code{\link{CDaR}} \code{\link{MultiBetaDrawdown}} \code{\link{BetaDrawdown}} + #'@references #'Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model #'(CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -19,6 +19,9 @@ #'@param h Look back period #'@param geomtric geometric utilize geometric chaining (TRUE) or simple/arithmetic #'chaining(FALSE) to aggregate returns, default is TRUE. #'@param ... any other variable +#'@author Pulkit Mehrotra +#'@seealso \code{\link{chart.REDD}} \code{\link{EconomicDrawdown}} +#'\code{\link{rollDrawdown}} \code{\link{REDDCOPS}} \code{\link{rollEconomicMax}} #' #'@references Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to #'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -19,6 +19,8 @@ #'to aggregate returns, default is TRUE #'@param \dots any other variable #'@author Pulkit Mehrotra +#'@seealso \code{\link{chart.REDD}} \code{\link{EDDCOPS}} +#'\code{\link{rollDrawdown}} \code{\link{REDDCOPS}} \code{\link{rollEconomicMax}} #'@references Yang, Z. 
George and Zhong, Liang, Optimal Portfolio Strategy to #'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) #'@examples Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -1,7 +1,7 @@ #'@title #'Modelling Drawdown using Extreme Value Theory #' -#"@description +#'@description #'It has been shown empirically that Drawdowns can be modelled using Modified Generalized Pareto #'distribution(MGPD), Generalized Pareto Distribution(GPD) and other particular cases of MGPD such #'as weibull distribution \eqn{MGPD(\gamma,0,\psi)} and unit exponential distribution\eqn{MGPD(1,0,\psi)} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -40,6 +40,7 @@ #' @param confidence the confidence interval #' @param type The type of distribution "normal" or "ar"."ar" stands for Autoregressive. #' @author Pulkit Mehrotra +#' @seealso \code{\link{chart.Penance}} \code{\link{table.Penance}} \code{\link{TuW}} #' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). #' #' @examples Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -39,6 +39,7 @@ #'To be given in case the return series is not given. #' #'@author Pulkit Mehrotra +#'@seealso \code{\link{ProbSharpeRatio}} \code{\link{PsrPortfolio}} \code{\link{table.PSR}} #'@references Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio #'Efficient Frontier} (July 1, 2012). Journal of Risk, Vol. 15, No. 2, Winter #' 2012/13 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -23,6 +23,7 @@ #'@param delta The value of delta Z #' #'@author Pulkit Mehrotra +#'@seealso \code{\link{ProbSharpeRatio}} \code{\link{table.PSR}} \code{\link{MinTrackRecord}} #'@references Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio #'Efficient Frontier} (July 1, 2012). Journal of Risk, Vol. 15, No. 2, Winter #'2012/13 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -31,6 +31,7 @@ #' To be given in case the return series is not given. #' @param n track record length. To be given in case the return series is not given. 
#'@author Pulkit Mehrotra +#'@seealso \code{\link{PsrPortfolio}} \code{\link{table.PSR}} \code{\link{MinTrackRecord}} #' @references Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio #' Efficient Frontier} (July 1, 2012). Journal of Risk, Vol. 15, No. 2, Winter #' 2012/13 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -30,6 +30,8 @@ #'@param ... any other variable #' #'@author Pulkit Mehrotra +#'@seealso \code{\link{chart.REDD}} \code{\link{EconomicDrawdown}} +#'\code{\link{rollDrawdown}} \code{\link{EDDCOPS}} \code{\link{rollEconomicMax}} #'@references Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to #'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) #' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -21,6 +21,8 @@ #'@param geomtric geometric utilize geometric chaining (TRUE) or simple/arithmetic #'chaining(FALSE) to aggregate returns, default is TRUE. #'@param ... any other variable #'@author Pulkit Mehrotra +#'@seealso \code{\link{chart.REDD}} \code{\link{EconomicDrawdown}} +#'\code{\link{rollDrawdown}} \code{\link{REDDCOPS}} \code{\link{EDDCOPS}} #'@examples #'rollEconomicMax(edhec,0.08,100) #'@export Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -39,7 +39,7 @@ #'A Sharpe Ratio Indifference Curve Approach (January 2013). Algorithmic Finance, #'Vol. 2, No. 1 (2013). #' -#'@seealso \code{\link{plot}} +#'@seealso \code{\link{BenchmarkSR}} \code{\link{chart.BenchmarkSR}} \code{\link{plot}} #'@keywords ts multivariate distribution models hplot #'@examples #' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -27,6 +27,7 @@ #' @param confidence the confidence interval #' @param type The type of distribution "normal" or "ar"."ar" stands for Autoregressive. #' @author Pulkit Mehrotra +#' @seealso \code{\link{chart.Penance}} \code{\link{MaxDD}} \code{\link{table.Penance}} #' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). 
#' #' @examples Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -1,5 +1,8 @@ #'@title #'Penance vs phi plot +#' +#'@description +#' #'A plot for Penance vs phi for the given portfolio #'The relationship between penance and phi is given by #' @@ -29,7 +32,7 @@ #'@param xlim set the xlim value, as in \code{\link{plot}} #' #'@author Pulkit Mehrotra -#'@seealso \code{\link{plot}} +#'@seealso \code{\link{plot}} \code{\link{table.Penance}} \code{\link{MaxDD}} \code{\link{TuW}} #'@keywords ts multivariate distribution models hplot #'@examples #'chart.Penance(edhec,0.95) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -13,6 +13,8 @@ #'@param colorset set the colorset label, as in \code{\link{plot}} #'@param \dots any other variable #'@author Pulkit Mehrotra +#'@seealso \code{\link{plot}} \code{\link{EconomicDrawdown}} \code{\link{EDDCOPS}} +#'\code{\link{rollDrawdown}} \code{\link{REDDCOPS}} \code{\link{rollEconomicMax}} #'@references Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to #'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) #'@examples Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -20,6 +20,8 @@ #'to aggregate returns, default is TRUE #'@param \dots any other variable #'@author Pulkit Mehrotra +#'@seealso \code{\link{chart.REDD}} \code{\link{EconomicDrawdown}} +#'\code{\link{EDDCOPS}} \code{\link{REDDCOPS}} \code{\link{rollEconomicMax}} #'@references Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to #'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) #'@examples Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -14,6 +14,7 @@ #'@param weights the weights for the portfolio #' #'@author Pulkit Mehrotra +#'@seealso \code{\link{ProbSharpeRatio}} \code{\link{PsrPortfolio}} \code{\link{MinTrackRecord}} #'@references Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio #'Efficient Frontier} (July 1, 2012). Journal of Risk, Vol. 15, No. 
2, Winter #' 2012/13 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R 2013-08-19 14:44:14 UTC (rev 2827) @@ -10,6 +10,7 @@ #' @param confidence the confidence interval #' #' @author Pulkit Mehrotra +#' @seealso \code{\link{chart.Penance}} \code{\link{MaxDD}} \code{\link{TuW}} #' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). #' @export Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/AlphaDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/AlphaDrawdown.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/AlphaDrawdown.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -55,10 +55,19 @@ AlphaDrawdown(edhec[,1],edhec[,2],type="average") # expected value : 1.692592 } +\author{ + Pulkit Mehrotra +} \references{ Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University of Florida,September 2012. } +\seealso{ + \code{\link{ES}} \code{\link{maxDrawdown}} + \code{\link{CdarMultiPath}} \code{\link{CDaR}} + \code{\link{MultiBetaDrawdown}} + \code{\link{BetaDrawdown}} +} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/BenchmarkSR.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/BenchmarkSR.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/BenchmarkSR.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -25,6 +25,9 @@ data(edhec) BenchmarkSR(edhec) #expected 0.393797 } +\author{ + Pulkit Mehrotra +} \references{ Bailey, David H. and Lopez de Prado, Marcos, The Strategy Approval Decision: A Sharpe Ratio Indifference Curve Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -35,26 +35,24 @@ The drawdown beta is formulated as follows \deqn{\beta_DD = - \frac{{\sum_{t=1}^T}{q_t^\asterisk}{(w_{k^{\asterisk}(t)}-w_t)}}{D_{\alpha}(w^M)}} + \frac{{\sum_{t=1}^T}{q_t^\**}{(w_{k^{\**}(t)}-w_t)}}{D_{\alpha}(w^M)}} here \eqn{\beta_DD} is the drawdown beta of the instrument. - \eqn{k^{\asterisk}(t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_k^M} + \eqn{k^{\**}(t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_k^M} - \eqn{q_t^\asterisk=1/((1-\alpha)T)} if \eqn{d_t^M} is one - of the \eqn{(1-\alpha)T} largest drawdowns \eqn{d_1^{M} - ,......d_t^M} of the optimal portfolio and - \eqn{q_t^\asterisk = 0} otherwise. It is assumed that - \eqn{D_\alpha(w^M) {\neq} 0} and that \eqn{q_t^\asterisk} - and \eqn{k^{\asterisk}(t) are uniquely determined for all - \eqn{t = 1....T} + \eqn{q_t^\**=1/((1-\alpha)T)} if \eqn{d_t^M} is one of + the \eqn{(1-\alpha)T} largest drawdowns \eqn{d_1^{M} + ,......d_t^M} of the optimal portfolio and \eqn{q_t^\** = + 0} otherwise. 
It is assumed that \eqn{D_\alpha(w^M) + {\neq} 0} and that \eqn{q_t^\**} and \eqn{k^{\**}(t)} are + uniquely determined for all \eqn{t = 1....T} The numerator in \eqn{\beta_DD} is the average rate of return of the instrument over time periods corresponding to the \eqn{(1-\alpha)T} largest drawdowns of the optimal - portfolio, where \eqn{w_t - w_k^{\asterisk}(t)} is the + portfolio, where \eqn{w_t - w_k^{\**}(t)} is the cumulative rate of return of the instrument from the - optimal portfolio#' peak time \eqn{k^\asterisk(t)} to - time t. + optimal portfolio peak time \eqn{k^\**(t)} to time t. The difference in CDaR and standard betas can be explained by the conceptual difference in beta @@ -67,10 +65,18 @@ \examples{ BetaDrawdown(edhec[,1],edhec[,2]) } +\author{ + Pulkit Mehrotra +} \references{ Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University of Florida,September 2012. } +\seealso{ + \code{\link{ES}} \code{\link{maxDrawdown}} + \code{\link{CdarMultiPath}} \code{\link{AlphaDrawdown}} + \code{\link{MultiBetaDrawdown}} \code{\link{CDaR}} +} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/CdarMultiPath.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/CdarMultiPath.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/CdarMultiPath.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -54,10 +54,19 @@ \eqn{\alpha} = 0, \eqn{D_\alpha(w)} is the average drawdown } +\author{ + Pulkit Mehrotra +} \references{ Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University of Florida, September 2012 } +\seealso{ + \code{\link{ES}} \code{\link{maxDrawdown}} + \code{\link{CDaR}} \code{\link{AlphaDrawdown}} + \code{\link{MultiBetaDrawdown}} + \code{\link{BetaDrawdown}} +} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -45,9 +45,17 @@ data(edhec) EDDCOPS(edhec,delta = 0.1,gamma = 0.7,Rf = 0) } +\author{ + Pulkit Mehrotra +} \references{ Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) } +\seealso{ + \code{\link{chart.REDD}} \code{\link{EconomicDrawdown}} + \code{\link{rollDrawdown}} \code{\link{REDDCOPS}} + \code{\link{rollEconomicMax}} +} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -34,9 +34,17 @@ \examples{ EconomicDrawdown(edhec,0.08,100) } +\author{ + Pulkit Mehrotra +} \references{ Yang, Z. 
George and Zhong, Liang, Optimal Portfolio Strategy to Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) } +\seealso{ + \code{\link{chart.REDD}} \code{\link{EDDCOPS}} + \code{\link{rollDrawdown}} \code{\link{REDDCOPS}} + \code{\link{rollEconomicMax}} +} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -65,9 +65,16 @@ MaxDD(edhec,0.95,"ar") MaxDD(edhec[,1],0.95,"normal") #expected values 4.241799 6.618966 } +\author{ + Pulkit Mehrotra +} \references{ Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). } +\seealso{ + \code{\link{chart.Penance}} \code{\link{table.Penance}} + \code{\link{TuW}} +} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -68,11 +68,18 @@ MinTrackRecord(refSR = 1/12^0.5,Rf = 0,p=0.95,sr = 2/12^0.5,sk=-0.72,kr=5.78) MinTrackRecord(edhec[,1:2],refSR = c(0.28,0.24)) } +\author{ + Pulkit Mehrotra +} \references{ Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio Efficient Frontier} (July 1, 2012). Journal of Risk, Vol. 15, No. 2, Winter 2012/13 } +\seealso{ + \code{\link{ProbSharpeRatio}} \code{\link{PsrPortfolio}} + \code{\link{table.PSR}} +} \keyword{distribution} \keyword{models} \keyword{multivariate} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/MonteSimulTriplePenance.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/MonteSimulTriplePenance.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/MonteSimulTriplePenance.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -38,6 +38,9 @@ \examples{ MonteSimulTriplePenance(10^6,0.5,1,2,1,25,0.95) # Expected Value Quantile (Exact) = 6.781592 } +\author{ + Pulkit Mehrotra +} \references{ Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -60,10 +60,18 @@ MultiBetaDrawdown(cbind(edhec,edhec),cbind(edhec[,2],edhec[,2]),sample = 2,ps=c(0.4,0.6)) BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431 } +\author{ + Pulkit Mehrotra +} \references{ Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM) with Drawdown Measure.Research Report 2012-9, ISE Dept., University of Florida,September 2012. 
} +\seealso{ + \code{\link{ES}} \code{\link{maxDrawdown}} + \code{\link{CdarMultiPath}} \code{\link{AlphaDrawdown}} + \code{\link{CDaR}} \code{\link{BetaDrawdown}} +} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -60,11 +60,18 @@ ProbSharpeRatio(refSR = 1/12^0.5,Rf = 0,p=0.95,sr = 2/12^0.5,sk=-0.72,kr=5.78,n=59) ProbSharpeRatio(edhec[,1:2],refSR = c(0.28,0.24)) } +\author{ + Pulkit Mehrotra +} \references{ Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio Efficient Frontier} (July 1, 2012). Journal of Risk, Vol. 15, No. 2, Winter 2012/13 } +\seealso{ + \code{\link{PsrPortfolio}} \code{\link{table.PSR}} + \code{\link{MinTrackRecord}} +} \keyword{distribution} \keyword{models} \keyword{multivariate} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -45,11 +45,18 @@ data(edhec) PsrPortfolio(edhec) } +\author{ + Pulkit Mehrotra +} \references{ Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio Efficient Frontier} (July 1, 2012). Journal of Risk, Vol. 15, No. 2, Winter 2012/13 } +\seealso{ + \code{\link{ProbSharpeRatio}} \code{\link{table.PSR}} + \code{\link{MinTrackRecord}} +} \keyword{distribution} \keyword{models} \keyword{multivariate} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -71,9 +71,17 @@ data(managers) REDDCOPS(managers[,1],0.80, Rf = managers[,10,drop=FALSE],12,asset="one") } +\author{ + Pulkit Mehrotra +} \references{ Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) } +\seealso{ + \code{\link{chart.REDD}} \code{\link{EconomicDrawdown}} + \code{\link{rollDrawdown}} \code{\link{EDDCOPS}} + \code{\link{rollEconomicMax}} +} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd 2013-08-19 13:08:10 UTC (rev 2826) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd 2013-08-19 14:44:14 UTC (rev 2827) @@ -47,9 +47,16 @@ TuW(edhec,0.95,"ar") TuW(edhec[,1],0.95,"normal") # expected value 103.2573 } +\author{ + Pulkit Mehrotra +} \references{ Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). 
}
+\seealso{
+  \code{\link{chart.Penance}} \code{\link{MaxDD}}
+  \code{\link{table.Penance}}
+}

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd	2013-08-19 13:08:10 UTC (rev 2826) [TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/returnanalytics -r 2827

From noreply at r-forge.r-project.org  Mon Aug 19 17:25:13 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Mon, 19 Aug 2013 17:25:13 +0200 (CEST)
Subject: [Returnanalytics-commits] r2828 - pkg/Meucci/demo
Message-ID: <20130819152513.E15AF185AB3 at r-forge.r-project.org>

Author: xavierv
Date: 2013-08-19 17:25:13 +0200 (Mon, 19 Aug 2013)
New Revision: 2828

Added:
   pkg/Meucci/demo/S_SelectionHeuristics.R
Log:
- added S_SelectionHeuristics demo script from chapter 3

Added: pkg/Meucci/demo/S_SelectionHeuristics.R
===================================================================
--- pkg/Meucci/demo/S_SelectionHeuristics.R	                        (rev 0)
+++ pkg/Meucci/demo/S_SelectionHeuristics.R	2013-08-19 15:25:13 UTC (rev 2828)
@@ -0,0 +1,269 @@
+#' Compute the r-square of selected factors, as described in A. Meucci "Risk and Asset Allocation",
+#' Springer, 2005
+#'
+#' @param Who : [vector] indices for selection
+#' @param M   : [struct] information, with covariance components \code{Cov_FF} and \code{Cov_XF}
+#'
+#' @return g  : [scalar] r-square for the selected factors
+#'
+#' @references
+#' \url{http://symmys.com/node/170}
+#' See Meucci's script for "SelectGoodness.m"
+#'
+#' @author Xavier Valls \email{flamejat@@gmail.com}
+
+SelectGoodness = function( Who, M )
+{
+    Cov_FF_k = M$Cov_FF[ Who, Who ];
+    Cov_XF_k = M$Cov_XF[ , Who ];
+
+    # abridged version of the variance of the error
+    minCov_U = Cov_XF_k %*% ( solve( Cov_FF_k ) %*% matrix( Cov_XF_k ) );
+
+    # abridged version of r^2
+    g = sum( diag( minCov_U ) );
+
+    return( g );
+}
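+
+# A quick hand check of SelectGoodness (illustrative; not part of Meucci's
+# original script). With the hypothetical 2-factor metric M0 below, selecting
+# factor 1 alone gives g = Cov_XF[1]^2 / Cov_FF[1,1] = 0.5^2 / 2 = 0.125:
+#   M0 = list( Cov_FF = matrix( c( 2, 0.3, 0.3, 1 ), 2, 2 ),
+#              Cov_XF = matrix( c( 0.5, 0.2 ), 1, 2 ) );
+#   SelectGoodness( 1, M0 ); # 0.125
+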
Meucci "Risk and Asset Allocation", Springer, 2005 +#' +#' @param OutOfWho : [vector] (N x 1) of selection indices +#' @param AcceptBy : [scalar] number of factors to accept at each iteration +#' @param Metric : [struct] metric with information on covariance +#' +#' @return Who : [vector] (N x 1) indices +#' @return Num : [vector] (N x 1) rank of the selection +#' @return G : [vector] (N x 1) r-square (cumulative) +#' +#' @note same than recursive rejection, but it starts from the empty set, instead of from the full set +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "SelectAcceptByS.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} + +SelectAcceptByS = function( OutOfWho, AcceptBy, Metric ) +{ + N = length(OutOfWho); + + Who = NULL; + Num = NULL; + G = NULL; + while( length(Who) < N ) + { + Candidates = setdiff( OutOfWho, Who ); + + if( length( Candidates ) != 1 ) + { + Combos = t( combn( c(Candidates), AcceptBy ) ); + } + else + { + Combos = matrix(nchoosek(c(Candidates), AcceptBy )); + } + + L = dim( Combos )[1]; + a = matrix( 0, 1, L ); + for( l in 1 : L ) + { + a[ l ] = SelectGoodness( cbind( Who, Combos[ l, ] ), Metric ); + } + g = max( a ); + Pick = which.max(a); + Who = cbind( Who, Combos[ Pick, ] ); + G = cbind( G, g ); + Num = cbind( Num, length(Who) ); + } + + return( list( Who = Who, Num = Num, G = G ) ); +} + +#' Recursive rejection routine for factor selection, as described in A. Meucci "Risk and Asset Allocation", Springer, 2005 +#' +#' @param OutOfWho : [vector] (N x 1) of selection indices +#' @param RejecttBy : [scalar] number of factors to accept at each iteration +#' @param Metric : [struct] metric with information on covariance +#' +#' @return Who : [vector] (N x 1) indices +#' @return Num : [vector] (N x 1) rank of the selection +#' @return G : [vector] (N x 1) r-square (cumulative) +#' +#' @note the recursive rejection routine in Meucci (2005, section 3.4.5) to solve heuristically the above +#' problem by eliminating the factors one at a time starting from the full set +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "SelectRejectByS.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} + +SelectRejectByS = function(OutOfWho, RejectBy, Metric) +{ + Who = OutOfWho; + Num = length( Who ); + G = SelectGoodness( Who, Metric ); + + while( length(Who) > 1 ) + { + + Drop = t( combn( Who, RejectBy ) ); + + L = dim( Drop )[ 1 ]; + a = matrix( 0, 1, L ); + for( l in 1 : L ) + { + a[ l ] = SelectGoodness( setdiff( Who, Drop[ l, ] ), Metric ); + } + g = max(a); + Pick = which.max( a ); + Who = setdiff( Who, Drop[ Pick, ] ); + G = cbind( G, g ); + Num = cbind( Num, length(Who) ); + } + + return( list( Who = Who, Num = Num, G = G ) ); +} + + + +#' Exact approach for factor selection, as described in A. Meucci "Risk and Asset Allocation", Springer, 2005 +#' +#' @param OutOfWho : [vector] (N x 1) of selection indices +#' @param Metric : [struct] metric with information on covariance +#' +#' @return Who : [vector] (N x 1) indices +#' @return Num : [vector] (N x 1) rank of the selection +#' @return G : [vector] (N x 1) r-square (cumulative) +#' +#' @note o iterate over the full set of factor combination +#' o !!! extremely time consuming !!! 
+#'
+#' @references
++#' \url{http://symmys.com/node/170}
+#' See Meucci's script for "SelectRejectByS.m"
+#'
+#' @author Xavier Valls \email{flamejat@@gmail.com}
+
+SelectExactNChooseK = function( OutOfWho, K, M )
+{
+    # enumerate all K-subsets of the full index set
+    # (the original 'OutOfWho[ i ]' referenced an undefined 'i')
+    Combos = t( combn( OutOfWho, K ) );
+    L = dim( Combos )[ 1 ];
+    a = matrix( 0, 1, L );
+
+    for( l in 1 : L )
+    {
+        a[ l ] = SelectGoodness( Combos[ l, ], M );
+    }
+
+    g = max( a );
+    Pick = which.max( a );
+    Who = Combos[ Pick, ];
+
+    return( list( Who = Who, g = g ) );
+}
+
+#' This script selects the best K out of N factors in the Factors on Demand approach to attribution,
+#' as described in A. Meucci, "Risk and Asset Allocation", Springer, 2005, Chapter 3.
+#'
+#' @references
+#' \url{http://symmys.com/node/170}
+#' See Meucci's script for "S_SelectionHeuristics.m"
+#'
+#' @author Xavier Valls \email{flamejat@@gmail.com}
+#'
+
+##################################################################################################################
+### Inputs
+N = 50;
+# matrix of standard normal draws (the original called the MATLAB-style randn(), which does not exist in R)
+A = matrix( rnorm( ( N + 1 ) * ( N + 1 ) ), N + 1, N + 1 );
+Sig = A %*% t( A );
+
+Metric = list( Cov_FF = Sig[ 1:N, 1:N ], Cov_XF = matrix( Sig[ N+1, 1:N ], 1, N ) );
+OutOfWho = 1:N;
+
+##################################################################################################################
+### Naive routine for factor selection
+SN = SelectNaive( OutOfWho, Metric );
+
+##################################################################################################################
+### Acceptance routine for factor selection
+AcceptBy = 1;
+SAB = SelectAcceptByS( OutOfWho, AcceptBy, Metric );
+
+##################################################################################################################
+### Rejection routine for factor selection
+RejectBy = 1;
+SRB = SelectRejectByS( OutOfWho, RejectBy, Metric );
+
+##################################################################################################################
+### Plots
+dev.new();
+h1 = plot( SN$Num, SN$G, col = "black", type = "l", xlab = paste( "num players out of total", N ), ylab = "fit" );
+h2 = lines( SAB$Num, SAB$G, col = "blue" );
+h3 = lines( SRB$Num, SRB$G, col = "red" );
+legend( "bottomright", legend = c( "naive", "rec. rejection", "rec. acceptance" ), col = c( "black", "red", "blue" ), lty = 1, bg = "gray90" )
+
+# exact routine: note that with N = 50 the exhaustive search over all subset
+# sizes is astronomically large; reduce N above to actually run this part
+print("exact routine; be patient...");
+
+nOutOfWho = length( OutOfWho );
+GE   = NULL;
+NumE = NULL;
+
+for( k in 1 : nOutOfWho )
+{
+    print( k );
+
+    SENC = SelectExactNChooseK( OutOfWho, k, Metric );
+    GE   = cbind( GE, SENC$g ); # the function returns 'g', not 'G'
+    NumE = cbind( NumE, k );
+}
+
+dev.new(); # open a new device so the heuristics comparison figure is preserved
+h4 = plot( NumE, GE, col = "red" );
+

From noreply at r-forge.r-project.org  Mon Aug 19 21:22:09 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Mon, 19 Aug 2013 21:22:09 +0200 (CEST)
Subject: [Returnanalytics-commits] r2829 - in pkg/PortfolioAnalytics: . R man
Message-ID: <20130819192209.2B83F185307 at r-forge.r-project.org>

Author: rossbennett34
Date: 2013-08-19 21:22:08 +0200 (Mon, 19 Aug 2013)
New Revision: 2829

Added:
   pkg/PortfolioAnalytics/man/var.portfolio.Rd
Modified:
   pkg/PortfolioAnalytics/NAMESPACE
   pkg/PortfolioAnalytics/R/objective.R
   pkg/PortfolioAnalytics/R/optimize.portfolio.R
   pkg/PortfolioAnalytics/man/add.objective.Rd
   pkg/PortfolioAnalytics/man/optimize.portfolio.Rd
   pkg/PortfolioAnalytics/man/optimize.portfolio.rebalancing.Rd
Log:
updating documentation.
Primarily adding details of the object returned by optimize.portfolio Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-19 15:25:13 UTC (rev 2828) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-19 19:22:08 UTC (rev 2829) @@ -1,6 +1,4 @@ export(add.constraint) -export(add.objective_v1) -export(add.objective_v2) export(add.objective) export(applyFUN) export(box_constraint) @@ -21,7 +19,6 @@ export(charts.ROI) export(charts.RP) export(constrained_group_tmp) -export(constrained_objective_v1) export(constrained_objective_v2) export(constrained_objective) export(constraint_ROI) @@ -30,6 +27,7 @@ export(diversification_constraint) export(diversification) export(extract.efficient.frontier) +export(extractObjectiveMeasures) export(extractStats.optimize.portfolio.DEoptim) export(extractStats.optimize.portfolio.GenSA) export(extractStats.optimize.portfolio.parallel) @@ -53,10 +51,8 @@ export(is.portfolio) export(minmax_objective) export(objective) -export(optimize.portfolio_v1) export(optimize.portfolio_v2) export(optimize.portfolio.parallel) -export(optimize.portfolio.rebalancing_v1) export(optimize.portfolio.rebalancing) export(optimize.portfolio) export(plot.optimize.portfolio.DEoptim) @@ -96,10 +92,12 @@ export(trailingFUN) export(turnover_constraint) export(turnover_objective) +export(turnover) export(txfrm_box_constraint) export(txfrm_group_constraint) export(txfrm_position_limit_constraint) export(txfrm_weight_sum_constraint) export(update_constraint_v1tov2) export(update.constraint) +export(var.portfolio) export(weight_sum_constraint) Modified: pkg/PortfolioAnalytics/R/objective.R =================================================================== --- pkg/PortfolioAnalytics/R/objective.R 2013-08-19 15:25:13 UTC (rev 2828) +++ pkg/PortfolioAnalytics/R/objective.R 2013-08-19 19:22:08 UTC (rev 2829) @@ -200,6 +200,7 @@ #' Objectives of type 'turnove' and 'minmax' are also supported. #' #' @param portfolio an object of type 'portfolio' to add the objective to, specifying the portfolio for the optimization, see \code{\link{portfolio}} +#' @param constraints a 'v1_constraint' object for backwards compatibility, see \code{\link{constraint}} #' @param type character type of the objective to add or update, currently 'return','risk', or 'risk_budget' #' @param name name of the objective, should correspond to a function, though we will try to make allowances #' @param arguments default arguments to be passed to an objective function when executed Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-19 15:25:13 UTC (rev 2828) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-19 19:22:08 UTC (rev 2829) @@ -911,8 +911,8 @@ #' #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns #' @param portfolio an object of type "portfolio" specifying the constraints and objectives for the optimization -#' @param constraints default=NULL, a list of constraint objects. An object of class ]v1_constraint' can be passed in here. -#' @param objectives default=NULL, a list of objective objects +#' @param constraints default=NULL, a list of constraint objects. An object of class v1_constraint' can be passed in here. +#' @param objectives default=NULL, a list of objective objects. 
#' @param optimize_method one of "DEoptim", "random", "ROI","ROI_old", "pso", "GenSA". For using \code{ROI_old}, need to use a constraint_ROI object in constraints. For using \code{ROI}, pass standard \code{constratint} object in \code{constraints} argument. Presently, ROI has plugins for \code{quadprog} and \code{Rglpk}. #' @param search_size integer, how many portfolios to test, default 20,000 #' @param trace TRUE/FALSE if TRUE will attempt to return additional information on the path or portfolios searched @@ -921,10 +921,76 @@ #' @param momentFUN the name of a function to call to set portfolio moments, default \code{\link{set.portfolio.moments_v2}} #' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE. #' -#' @return a list containing the optimal weights, some summary statistics, the function call, and optionally trace information +#' @return a list containing the following elements +#' \itemize{ +#' \item{\code{weights}:}{ The optimal set weights.} +#' \item{\code{objective_measures}:}{ A list containing the value of each objective corresponding to the optimal weights.} +#' \item{\code{out}:}{ The output of the solver.} +#' \item{\code{call}:}{ The function call.} +#' \item{\code{portfolio}:}{ The portfolio object.} +#' \item{\code{R}:}{ The asset returns.} +#' \item{\code{data summary:}}{ The first row and last row of \code{R}.} +#' \item{\code{elapsed_time:}}{ The amount of time that elapses while the optimization is run.} +#' \item{\code{end_t:}}{ The date and time the optimization completed.} +#' } +#' When Trace=TRUE is specified, the following elements will be returned in +#' addition to the elements above. The output depends on the optimization +#' method and is specific to each solver. Refer to the documentation of the +#' desired solver for more information. 
#' +#' \code{optimize_method="random"} +#' \itemize{ +#' \item{\code{random_portfolios}:}{ A matrix of the random portfolios.} +#' \item{\code{random_portfolio_objective_results}:}{ A list of the following elements for each random portfolio.} +#' \itemize{ +#' \item{\code{out}:}{ The output value of the solver corresponding to the random portfolio weights.} +#' \item{\code{weights}:}{ The weights of the random portfolio.} +#' \item{\code{objective_measures}:}{ A list of each objective measure corresponding to the random portfolio weights.} +#' } +#' } +#' +#' \code{optimize_method="DEoptim"} +#' \itemize{ +#' \item{\code{DEoutput:}}{ A list (of length 2) containing the following elements, see \code{\link{DEoptim}}.} +#' \itemize{ +#' \item{\code{optim}} +#' \item{\code{member}} +#' } +#' \item{\code{DEoptim_objective_results}:}{ A list containing the following elements for each intermediate population.} +#' \itemize{ +#' \item{\code{out}:}{ The output of the solver.} +#' \item{\code{weights}:}{ Population weights.} +#' \item{\code{init_weights}:}{ Initial population weights.} +#' \item{\code{objective_measures}:}{ A list of each objective measure corresponding to the weights} +#' } +#' } +#' +#' \code{optimize_method="pso"} +#' \itemize{ +#' \item{\code{PSOoutput}:}{ A list containing the following elements, see \code{\link{psoptim}}:} +#' \itemize{ +#' \item{par} +#' \item{value} +#' \item{counts} +#' \item{convergence} +#' \item{message} +#' \item{stats} +#' } +#' } +#' +#' \code{optimize_method="GenSA"} +#' \itemize{ +#' \item{\code{GenSAoutput:}}{ A list containing the following elements, see \code{\link{GenSA}}:} +#' \itemize{ +#' \item{value} +#' \item{par} +#' \item{trace.mat} +#' \item{counts} +#' } +#' } +#' #' @author Kris Boudt, Peter Carl, Brian G. 
Peterson, Ross Bennett -#' @aliases optimize.portfolio_v2, optimize_portfolio_v1 +#' @aliases optimize.portfolio_v2 optimize_portfolio_v1 #' @seealso \code{\link{portfolio.spec}} #' @name optimize.portfolio #' @export Modified: pkg/PortfolioAnalytics/man/add.objective.Rd =================================================================== --- pkg/PortfolioAnalytics/man/add.objective.Rd 2013-08-19 15:25:13 UTC (rev 2828) +++ pkg/PortfolioAnalytics/man/add.objective.Rd 2013-08-19 19:22:08 UTC (rev 2829) @@ -7,17 +7,21 @@ add.objective_v1(constraints, type, name, arguments = NULL, enabled = TRUE, ..., indexnum = NULL) - add.objective_v2(portfolio, type, name, arguments = NULL, - enabled = TRUE, ..., indexnum = NULL) + add.objective_v2(portfolio, constraints = NULL, type, + name, arguments = NULL, enabled = TRUE, ..., + indexnum = NULL) - add.objective(portfolio, type, name, arguments = NULL, - enabled = TRUE, ..., indexnum = NULL) + add.objective(portfolio, constraints = NULL, type, name, + arguments = NULL, enabled = TRUE, ..., indexnum = NULL) } \arguments{ \item{portfolio}{an object of type 'portfolio' to add the objective to, specifying the portfolio for the optimization, see \code{\link{portfolio}}} + \item{constraints}{a 'v1_constraint' object for backwards + compatibility, see \code{\link{constraint}}} + \item{type}{character type of the objective to add or update, currently 'return','risk', or 'risk_budget'} Modified: pkg/PortfolioAnalytics/man/optimize.portfolio.Rd =================================================================== --- pkg/PortfolioAnalytics/man/optimize.portfolio.Rd 2013-08-19 15:25:13 UTC (rev 2828) +++ pkg/PortfolioAnalytics/man/optimize.portfolio.Rd 2013-08-19 19:22:08 UTC (rev 2829) @@ -1,8 +1,7 @@ \name{optimize.portfolio} +\alias{optimize_portfolio_v1} \alias{optimize.portfolio} -\alias{optimize_portfolio_v1} \alias{optimize.portfolio_v2} -\alias{optimize.portfolio_v2,} \title{constrained optimization of portfolios} \usage{ optimize.portfolio_v1(R, constraints, @@ -10,14 +9,14 @@ search_size = 20000, trace = FALSE, ..., rp = NULL, momentFUN = "set.portfolio.moments_v1") - optimize.portfolio_v2(R, portfolio, constraints = NULL, - objectives = NULL, + optimize.portfolio_v2(R, portfolio = NULL, + constraints = NULL, objectives = NULL, optimize_method = c("DEoptim", "random", "ROI", "ROI_old", "pso", "GenSA"), search_size = 20000, trace = FALSE, ..., rp = NULL, momentFUN = "set.portfolio.moments", message = FALSE) - optimize.portfolio(R, portfolio, constraints = NULL, - objectives = NULL, + optimize.portfolio(R, portfolio = NULL, + constraints = NULL, objectives = NULL, optimize_method = c("DEoptim", "random", "ROI", "ROI_old", "pso", "GenSA"), search_size = 20000, trace = FALSE, ..., rp = NULL, momentFUN = "set.portfolio.moments", message = FALSE) @@ -30,11 +29,11 @@ the constraints and objectives for the optimization} \item{constraints}{default=NULL, a list of constraint - objects. An object of class ]v1_constraint' can be passed + objects. An object of class v1_constraint' can be passed in here.} \item{objectives}{default=NULL, a list of objective - objects} + objects.} \item{optimize_method}{one of "DEoptim", "random", "ROI","ROI_old", "pso", "GenSA". 
For using @@ -65,9 +64,59 @@ Display messages if TRUE.} } \value{ - a list containing the optimal weights, some summary - statistics, the function call, and optionally trace - information + a list containing the following elements \itemize{ + \item{\code{weights}:}{ The optimal set weights.} + \item{\code{objective_measures}:}{ A list containing the + value of each objective corresponding to the optimal + weights.} \item{\code{out}:}{ The output of the solver.} + \item{\code{call}:}{ The function call.} + \item{\code{portfolio}:}{ The portfolio object.} + \item{\code{R}:}{ The asset returns.} \item{\code{data + summary:}}{ The first row and last row of \code{R}.} + \item{\code{elapsed_time:}}{ The amount of time that + elapses while the optimization is run.} + \item{\code{end_t:}}{ The date and time the optimization + completed.} } When Trace=TRUE is specified, the following + elements will be returned in addition to the elements + above. The output depends on the optimization method and + is specific to each solver. Refer to the documentation of + the desired solver for more information. + + \code{optimize_method="random"} \itemize{ + \item{\code{random_portfolios}:}{ A matrix of the random + portfolios.} + \item{\code{random_portfolio_objective_results}:}{ A list + of the following elements for each random portfolio.} + \itemize{ \item{\code{out}:}{ The output value of the + solver corresponding to the random portfolio weights.} + \item{\code{weights}:}{ The weights of the random + portfolio.} \item{\code{objective_measures}:}{ A list of + each objective measure corresponding to the random + portfolio weights.} } } + + \code{optimize_method="DEoptim"} \itemize{ + \item{\code{DEoutput:}}{ A list (of length 2) containing + the following elements, see \code{\link{DEoptim}}.} + \itemize{ \item{\code{optim}} \item{\code{member}} } + \item{\code{DEoptim_objective_results}:}{ A list + containing the following elements for each intermediate + population.} \itemize{ \item{\code{out}:}{ The output of + the solver.} \item{\code{weights}:}{ Population weights.} + \item{\code{init_weights}:}{ Initial population weights.} + \item{\code{objective_measures}:}{ A list of each + objective measure corresponding to the weights} } } + + \code{optimize_method="pso"} \itemize{ + \item{\code{PSOoutput}:}{ A list containing the following + elements, see \code{\link{psoptim}}:} \itemize{ + \item{par} \item{value} \item{counts} \item{convergence} + \item{message} \item{stats} } } + + \code{optimize_method="GenSA"} \itemize{ + \item{\code{GenSAoutput:}}{ A list containing the + following elements, see \code{\link{GenSA}}:} \itemize{ + \item{value} \item{par} \item{trace.mat} \item{counts} } + } } \description{ This function aims to provide a wrapper for constrained Modified: pkg/PortfolioAnalytics/man/optimize.portfolio.rebalancing.Rd =================================================================== --- pkg/PortfolioAnalytics/man/optimize.portfolio.rebalancing.Rd 2013-08-19 15:25:13 UTC (rev 2828) +++ pkg/PortfolioAnalytics/man/optimize.portfolio.rebalancing.Rd 2013-08-19 19:22:08 UTC (rev 2829) @@ -8,7 +8,7 @@ rebalance_on = NULL, training_period = NULL, trailing_periods = NULL) - optimize.portfolio.rebalancing(R, portfolio, + optimize.portfolio.rebalancing(R, portfolio = NULL, constraints = NULL, objectives = NULL, optimize_method = c("DEoptim", "random", "ROI"), search_size = 20000, trace = FALSE, ..., rp = NULL, Added: pkg/PortfolioAnalytics/man/var.portfolio.Rd 
=================================================================== --- pkg/PortfolioAnalytics/man/var.portfolio.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/var.portfolio.Rd 2013-08-19 19:22:08 UTC (rev 2829) @@ -0,0 +1,23 @@ +\name{var.portfolio} +\alias{var.portfolio} +\title{Calculate portfolio variance} +\usage{ + var.portfolio(R, weights) +} +\arguments{ + \item{R}{xts object of asset returns} + + \item{weights}{vector of asset weights} +} +\value{ + numeric value of the portfolio variance +} +\description{ + This function is used to calculate the portfolio variance + via a call to constrained_objective when var is an object + for mean variance or quadratic utility optimization. +} +\author{ + Ross Bennett +} + From noreply at r-forge.r-project.org Tue Aug 20 02:12:41 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 20 Aug 2013 02:12:41 +0200 (CEST) Subject: [Returnanalytics-commits] r2830 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130820001241.5FCDA185A8A@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-20 02:12:41 +0200 (Tue, 20 Aug 2013) New Revision: 2830 Added: pkg/PortfolioAnalytics/man/quadratic_utility_objective.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/objective.R pkg/PortfolioAnalytics/man/add.objective.Rd Log: adding quadratic_utility as an objective type Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-19 19:22:08 UTC (rev 2829) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-20 00:12:41 UTC (rev 2830) @@ -72,6 +72,7 @@ export(print.optimize.portfolio.random) export(print.optimize.portfolio.ROI) export(print.portfolio) +export(quadratic_utility_objective) export(random_portfolios_v1) export(random_portfolios_v2) export(random_portfolios) Modified: pkg/PortfolioAnalytics/R/objective.R =================================================================== --- pkg/PortfolioAnalytics/R/objective.R 2013-08-19 19:22:08 UTC (rev 2829) +++ pkg/PortfolioAnalytics/R/objective.R 2013-08-20 00:12:41 UTC (rev 2830) @@ -130,7 +130,7 @@ # to add objectives to a portfolio object instead of a constraint object. if (!is.portfolio(portfolio)) {stop("portfolio passed in is not of class portfolio")} - if (!hasArg(name)) stop("you must supply a name for the objective") + if (type != "quadratic_utility" & !hasArg(name)) stop("you must supply a name for the objective") if (!hasArg(type)) stop("you must supply a type of objective to create") if (!hasArg(enabled)) enabled=TRUE if (!hasArg(arguments) | is.null(arguments)) arguments<-list() @@ -176,7 +176,12 @@ arguments=arguments, ...=...) }, - + qu=, quadratic_utility = {tmp_objective = quadratic_utility_objective(enabled=enabled, ...=...) 
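+  # note: for type="quadratic_utility" the 'name' and 'arguments' inputs are
+  # not used; 'risk_aversion' and 'target' reach quadratic_utility_objective()
+  # through '...'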
+ # quadratic_utility_objective returns a list of a return_objective and a portfolio_risk_objective + # we just need to combine it to the portfolio$objectives slot and return the portfolio + portfolio$objectives <- c(portfolio$objectives, tmp_objective) + return(portfolio) + }, null = {return(portfolio)} # got nothing, default to simply returning ) # end objective type switch @@ -201,7 +206,7 @@ #' #' @param portfolio an object of type 'portfolio' to add the objective to, specifying the portfolio for the optimization, see \code{\link{portfolio}} #' @param constraints a 'v1_constraint' object for backwards compatibility, see \code{\link{constraint}} -#' @param type character type of the objective to add or update, currently 'return','risk', or 'risk_budget' +#' @param type character type of the objective to add or update, currently 'return','risk', 'risk_budget', or 'quadratic_utility' #' @param name name of the objective, should correspond to a function, though we will try to make allowances #' @param arguments default arguments to be passed to an objective function when executed #' @param enabled TRUE/FALSE @@ -383,6 +388,28 @@ return(Objective) } # end minmax_objective constructor +#' constructor for quadratic utility objective +#' +#' This function calls \code{\link{return_objective}} and \code{\link{portfolio_risk_objective}} +#' to create a list of the objectives to be added to the portfolio. +#' +#' @param risk_aversion risk_aversion (i.e. lambda) parameter to penalize variance +#' @param target target mean return value +#' @param enabled TRUE/FALSE, default enabled=TRUE +#' @return a list of two elements +#' \itemize{ +#' \item{\code{return_objective}} +#' \item{\code{portfolio_risk_objective}} +#' } +#' @author Ross Bennett +#' @export +quadratic_utility_objective <- function(risk_aversion=1, target=NULL, enabled=TRUE){ + qu <- list() + qu[[1]] <- return_objective(name="mean", target=target, enabled=enabled) + qu[[2]] <- portfolio_risk_objective(name="var", risk_aversion=risk_aversion, enabled=enabled) + return(qu) +} # end quadratic utility objective constructor + #' Insert a list of objectives into the objectives slot of a portfolio object #' #' @param portfolio object of class 'portfolio' Modified: pkg/PortfolioAnalytics/man/add.objective.Rd =================================================================== --- pkg/PortfolioAnalytics/man/add.objective.Rd 2013-08-19 19:22:08 UTC (rev 2829) +++ pkg/PortfolioAnalytics/man/add.objective.Rd 2013-08-20 00:12:41 UTC (rev 2830) @@ -23,7 +23,8 @@ compatibility, see \code{\link{constraint}}} \item{type}{character type of the objective to add or - update, currently 'return','risk', or 'risk_budget'} + update, currently 'return','risk', 'risk_budget', or + 'quadratic_utility'} \item{name}{name of the objective, should correspond to a function, though we will try to make allowances} Added: pkg/PortfolioAnalytics/man/quadratic_utility_objective.Rd =================================================================== --- pkg/PortfolioAnalytics/man/quadratic_utility_objective.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/quadratic_utility_objective.Rd 2013-08-20 00:12:41 UTC (rev 2830) @@ -0,0 +1,29 @@ +\name{quadratic_utility_objective} +\alias{quadratic_utility_objective} +\title{constructor for quadratic utility objective} +\usage{ + quadratic_utility_objective(risk_aversion = 1, + target = NULL, enabled = TRUE) +} +\arguments{ + \item{risk_aversion}{risk_aversion (i.e. 
lambda) + parameter to penalize variance} + + \item{target}{target mean return value} + + \item{enabled}{TRUE/FALSE, default enabled=TRUE} +} +\value{ + a list of two elements \itemize{ + \item{\code{return_objective}} + \item{\code{portfolio_risk_objective}} } +} +\description{ + This function calls \code{\link{return_objective}} and + \code{\link{portfolio_risk_objective}} to create a list + of the objectives to be added to the portfolio. +} +\author{ + Ross Bennett +} + From noreply at r-forge.r-project.org Tue Aug 20 06:18:38 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 20 Aug 2013 06:18:38 +0200 (CEST) Subject: [Returnanalytics-commits] r2831 - in pkg/PortfolioAnalytics: R man Message-ID: <20130820041838.8E673185118@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-20 06:18:38 +0200 (Tue, 20 Aug 2013) New Revision: 2831 Modified: pkg/PortfolioAnalytics/R/charts.DE.R pkg/PortfolioAnalytics/R/charts.GenSA.R pkg/PortfolioAnalytics/R/charts.PSO.R pkg/PortfolioAnalytics/R/charts.ROI.R pkg/PortfolioAnalytics/R/charts.RP.R pkg/PortfolioAnalytics/man/chart.Scatter.DE.Rd pkg/PortfolioAnalytics/man/chart.Scatter.GenSA.Rd pkg/PortfolioAnalytics/man/chart.Scatter.ROI.Rd pkg/PortfolioAnalytics/man/chart.Scatter.RP.Rd pkg/PortfolioAnalytics/man/chart.Scatter.pso.Rd pkg/PortfolioAnalytics/man/chart.Weights.DE.Rd pkg/PortfolioAnalytics/man/chart.Weights.GenSA.Rd pkg/PortfolioAnalytics/man/charts.DE.Rd pkg/PortfolioAnalytics/man/charts.GenSA.Rd pkg/PortfolioAnalytics/man/charts.ROI.Rd pkg/PortfolioAnalytics/man/charts.RP.Rd pkg/PortfolioAnalytics/man/charts.pso.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd Log: updating chart methods and documentation Modified: pkg/PortfolioAnalytics/R/charts.DE.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-20 00:12:41 UTC (rev 2830) +++ pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-20 04:18:38 UTC (rev 2831) @@ -10,8 +10,11 @@ # ############################################################################### -#' boxplot of the weight distributions in the random portfolios -#' @param DE set of random portfolios created by \code{\link{optimize.portfolio}} +#' boxplot of the weights of the optimal portfolios +#' +#' Chart the optimal weights and upper and lower bounds on weights of a portfolio run via \code{\link{optimize.portfolio}} +#' +#' @param DE optimal portfolio object created by \code{\link{optimize.portfolio}} #' @param neighbors set of 'neighbor' portfolios to overplot #' @param las numeric in \{0,1,2,3\}; the style of axis labels #' \describe{ @@ -29,8 +32,7 @@ #' @seealso \code{\link{optimize.portfolio}} #' @export chart.Weights.DE <- function(DE, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ - # Specific to the output of the random portfolio code with constraints - # @TODO: check that DE is of the correct class + # Specific to the output of optimize.portfolio with optimize_method="DEoptim" if(!inherits(DE, "optimize.portfolio.DEoptim")) stop("DE must be of class 'optimize.portfolio.DEoptim'") columnnames = names(DE$weights) @@ -86,14 +88,11 @@ axis(2, cex.axis = cex.axis, col = element.color) axis(1, 
labels=columnnames, at=1:numassets, las=las, cex.axis = cex.axis, col = element.color) box(col = element.color) - } #' classic risk return scatter of DEoptim results #' #' @param DE set of portfolios created by \code{\link{optimize.portfolio}} -#' @param R an optional an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the objective function where required -#' @param portfolio an object of type "portfolio" specifying the constraints and objectives for the optimization #' @param neighbors set of 'neighbor' portfolios to overplot, see Details in \code{\link{charts.DE}} #' @param return.col string matching the objective of a 'return' objective, on vertical axis #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis @@ -102,177 +101,183 @@ #' @param element.color color for the default plot scatter points #' @seealso \code{\link{optimize.portfolio}} #' @export -chart.Scatter.DE <- function(DE, R=NULL, portfolio=NULL, neighbors = NULL, return.col='mean', risk.col='ES', ..., element.color = "darkgray", cex.axis=0.8){ - # more or less specific to the output of the random portfolio code with constraints - # will work to a point with other functions, such as optimize.porfolio.parallel - # there's still a lot to do to improve this. - xtract = extractStats(DE) +chart.Scatter.DE <- function(DE, neighbors = NULL, return.col='mean', risk.col='ES', ..., element.color = "darkgray", cex.axis=0.8){ + # more or less specific to the output of the DEoptim portfolio code with constraints + # will work to a point with other functions, such as optimize.porfolio.parallel + # there's still a lot to do to improve this. + + if(!inherits(DE, "optimize.portfolio.DEoptim")) stop("DE must be of class 'optimize.portfolio.DEoptim'") + + R <- DE$R + portfolio <- DE$portfolio + xtract = extractStats(DE) + columnnames = colnames(xtract) + #return.column = grep(paste("objective_measures",return.col,sep='.'),columnnames) + return.column = pmatch(return.col,columnnames) + if(is.na(return.column)) { + return.col = paste(return.col,return.col,sep='.') + return.column = pmatch(return.col,columnnames) + } + #risk.column = grep(paste("objective_measures",risk.col,sep='.'),columnnames) + risk.column = pmatch(risk.col,columnnames) + if(is.na(risk.column)) { + risk.col = paste(risk.col,risk.col,sep='.') + risk.column = pmatch(risk.col,columnnames) + } + + # if(is.na(return.column) | is.na(risk.column)) stop(return.col,' or ',risk.col, ' do not match extractStats output') + + # If the user has passed in return.col or risk.col that does not match extractStats output + # This will give the flexibility of passing in return or risk metrics that are not + # objective measures in the optimization. 
This may cause issues with the "neighbors" + # functionality since that is based on the "out" column + if(is.na(return.column) | is.na(risk.column)){ + return.col <- gsub("\\..*", "", return.col) + risk.col <- gsub("\\..*", "", risk.col) + warning(return.col,' or ', risk.col, ' do not match extractStats output of $objective_measures slot') + # Get the matrix of weights for applyFUN + wts_index <- grep("w.", columnnames) + wts <- xtract[, wts_index] + if(is.na(return.column)){ + tmpret <- applyFUN(R=R, weights=wts, FUN=return.col) + xtract <- cbind(tmpret, xtract) + colnames(xtract)[which(colnames(xtract) == "tmpret")] <- return.col + } + if(is.na(risk.column)){ + tmprisk <- applyFUN(R=R, weights=wts, FUN=risk.col) + xtract <- cbind(tmprisk, xtract) + colnames(xtract)[which(colnames(xtract) == "tmprisk")] <- risk.col + } columnnames = colnames(xtract) - #return.column = grep(paste("objective_measures",return.col,sep='.'),columnnames) return.column = pmatch(return.col,columnnames) if(is.na(return.column)) { - return.col = paste(return.col,return.col,sep='.') - return.column = pmatch(return.col,columnnames) + return.col = paste(return.col,return.col,sep='.') + return.column = pmatch(return.col,columnnames) } - #risk.column = grep(paste("objective_measures",risk.col,sep='.'),columnnames) risk.column = pmatch(risk.col,columnnames) if(is.na(risk.column)) { - risk.col = paste(risk.col,risk.col,sep='.') - risk.column = pmatch(risk.col,columnnames) + risk.col = paste(risk.col,risk.col,sep='.') + risk.column = pmatch(risk.col,columnnames) } - - # if(is.na(return.column) | is.na(risk.column)) stop(return.col,' or ',risk.col, ' do not match extractStats output') - - # If the user has passed in return.col or risk.col that does not match extractStats output - # This will give the flexibility of passing in return or risk metrics that are not - # objective measures in the optimization. This may cause issues with the "neighbors" - # functionality since that is based on the "out" column - if(is.na(return.column) | is.na(risk.column)){ - return.col <- gsub("\\..*", "", return.col) - risk.col <- gsub("\\..*", "", risk.col) - warning(return.col,' or ', risk.col, ' do not match extractStats output of $objective_measures slot') - # Get the matrix of weights for applyFUN - wts_index <- grep("w.", columnnames) - wts <- xtract[, wts_index] - if(is.na(return.column)){ - tmpret <- applyFUN(R=R, weights=wts, FUN=return.col) - xtract <- cbind(tmpret, xtract) - colnames(xtract)[which(colnames(xtract) == "tmpret")] <- return.col + } + # print(colnames(head(xtract))) + + plot(xtract[,risk.column],xtract[,return.column], xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) 
+ + if(!is.null(neighbors)){ + if(is.vector(neighbors)){ + if(length(neighbors)==1){ + # overplot nearby portfolios defined by 'out' + orderx = order(xtract[,"out"]) #TODO this won't work if the objective is anything other than mean + subsetx = head(xtract[orderx,], n=neighbors) + } else{ + # assume we have a vector of portfolio numbers + subsetx = xtract[neighbors,] } - if(is.na(risk.column)){ - tmprisk <- applyFUN(R=R, weights=wts, FUN=risk.col) - xtract <- cbind(tmprisk, xtract) - colnames(xtract)[which(colnames(xtract) == "tmprisk")] <- risk.col + points(subsetx[,risk.column], subsetx[,return.column], col="lightblue", pch=1) + } + if(is.matrix(neighbors) | is.data.frame(neighbors)){ + # the user has likely passed in a matrix containing calculated values for risk.col and return.col + rtc = pmatch(return.col,columnnames) + if(is.na(rtc)) { + rtc = pmatch(paste(return.col,return.col,sep='.'),columnnames) } - columnnames = colnames(xtract) - return.column = pmatch(return.col,columnnames) - if(is.na(return.column)) { - return.col = paste(return.col,return.col,sep='.') - return.column = pmatch(return.col,columnnames) + rsc = pmatch(risk.col,columnnames) + if(is.na(rsc)) { + risk.column = pmatch(paste(risk.col,risk.col,sep='.'),columnnames) } - risk.column = pmatch(risk.col,columnnames) - if(is.na(risk.column)) { - risk.col = paste(risk.col,risk.col,sep='.') - risk.column = pmatch(risk.col,columnnames) - } + for(i in 1:nrow(neighbors)) points(neighbors[i,rsc], neighbors[i,rtc], col="lightblue", pch=1) } - # print(colnames(head(xtract))) - - plot(xtract[,risk.column],xtract[,return.column], xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) - - if(!is.null(neighbors)){ - if(is.vector(neighbors)){ - if(length(neighbors)==1){ - # overplot nearby portfolios defined by 'out' - orderx = order(xtract[,"out"]) #TODO this won't work if the objective is anything other than mean - subsetx = head(xtract[orderx,], n=neighbors) - } else{ - # assume we have a vector of portfolio numbers - subsetx = xtract[neighbors,] - } - points(subsetx[,risk.column], subsetx[,return.column], col="lightblue", pch=1) + } + + # points(xtract[1,risk.column],xtract[1,return.column], col="orange", pch=16) # overplot the equal weighted (or seed) + #check to see if portfolio 1 is EW DE$random_portoflios[1,] all weights should be the same + # if(!isTRUE(all.equal(DE$random_portfolios[1,][1],1/length(DE$random_portfolios[1,]),check.attributes=FALSE))){ + #show both the seed and EW if they are different + #NOTE the all.equal comparison could fail above if the first element of the first portfolio is the same as the EW weight, + #but the rest is not, shouldn't happen often with real portfolios, only toy examples + # points(xtract[2,risk.column],xtract[2,return.column], col="green", pch=16) # overplot the equal weighted (or seed) + # } + + ## Draw solution trajectory + if(!is.null(R) & !is.null(portfolio)){ + w.traj = unique(DE$DEoutput$member$bestmemit) + rows = nrow(w.traj) + rr = matrix(nrow=rows, ncol=2) + ## maybe rewrite as an apply statement by row on w.traj + rtc = NULL + rsc = NULL + trajnames = NULL + for(i in 1:rows){ + + w = w.traj[i,] + x = unlist(constrained_objective(w=w, R=R, portfolio=portfolio, trace=TRUE)) + names(x)<-name.replace(names(x)) + if(is.null(trajnames)) trajnames<-names(x) + if(is.null(rsc)){ + rtc = pmatch(return.col,trajnames) + if(is.na(rtc)) { + rtc = pmatch(paste(return.col,return.col,sep='.'),trajnames) } - if(is.matrix(neighbors) | is.data.frame(neighbors)){ - # the user has likely 
passed in a matrix containing calculated values for risk.col and return.col - rtc = pmatch(return.col,columnnames) - if(is.na(rtc)) { - rtc = pmatch(paste(return.col,return.col,sep='.'),columnnames) - } - rsc = pmatch(risk.col,columnnames) - if(is.na(rsc)) { - risk.column = pmatch(paste(risk.col,risk.col,sep='.'),columnnames) - } - for(i in 1:nrow(neighbors)) points(neighbors[i,rsc], neighbors[i,rtc], col="lightblue", pch=1) + rsc = pmatch(risk.col,trajnames) + if(is.na(rsc)) { + rsc = pmatch(paste(risk.col,risk.col,sep='.'),trajnames) } + } + rr[i,1] = x[rsc] #'FIXME + rr[i,2] = x[rtc] #'FIXME } + colors2 = colorRamp(c("blue","lightblue")) + colortrail = rgb(colors2((0:rows)/rows),max=255) + for(i in 1:rows){ + points(rr[i,1], rr[i,2], pch=1, col = colortrail[rows-i+1]) + } -# points(xtract[1,risk.column],xtract[1,return.column], col="orange", pch=16) # overplot the equal weighted (or seed) - #check to see if portfolio 1 is EW DE$random_portoflios[1,] all weights should be the same -# if(!isTRUE(all.equal(DE$random_portfolios[1,][1],1/length(DE$random_portfolios[1,]),check.attributes=FALSE))){ - #show both the seed and EW if they are different - #NOTE the all.equal comparison could fail above if the first element of the first portfolio is the same as the EW weight, - #but the rest is not, shouldn't happen often with real portfolios, only toy examples -# points(xtract[2,risk.column],xtract[2,return.column], col="green", pch=16) # overplot the equal weighted (or seed) -# } - - ## Draw solution trajectory - if(!is.null(R) & !is.null(portfolio)){ - w.traj = unique(DE$DEoutput$member$bestmemit) - rows = nrow(w.traj) - rr = matrix(nrow=rows, ncol=2) - ## maybe rewrite as an apply statement by row on w.traj - rtc = NULL - rsc = NULL - trajnames = NULL - for(i in 1:rows){ - - w = w.traj[i,] - x = unlist(constrained_objective(w=w, R=R, portfolio=portfolio, trace=TRUE)) - names(x)<-name.replace(names(x)) - if(is.null(trajnames)) trajnames<-names(x) - if(is.null(rsc)){ - rtc = pmatch(return.col,trajnames) - if(is.na(rtc)) { - rtc = pmatch(paste(return.col,return.col,sep='.'),trajnames) - } - rsc = pmatch(risk.col,trajnames) - if(is.na(rsc)) { - rsc = pmatch(paste(risk.col,risk.col,sep='.'),trajnames) - } - } - rr[i,1] = x[rsc] #'FIXME - rr[i,2] = x[rtc] #'FIXME - } - colors2 = colorRamp(c("blue","lightblue")) - colortrail = rgb(colors2((0:rows)/rows),max=255) - for(i in 1:rows){ - points(rr[i,1], rr[i,2], pch=1, col = colortrail[rows-i+1]) - } - - for(i in 2:rows){ - segments(rr[i,1], rr[i,2], rr[i-1,1], rr[i-1,2],col = colortrail[rows-i+1], lty = 1, lwd = 2) - } - } else{ - message("Trajectory cannot be drawn because return object or constraints were not passed.") + for(i in 2:rows){ + segments(rr[i,1], rr[i,2], rr[i-1,1], rr[i-1,2],col = colortrail[rows-i+1], lty = 1, lwd = 2) } - - - ## @TODO: Generalize this to find column containing the "risk" metric - if(length(names(DE)[which(names(DE)=='constrained_objective')])) { - result.slot<-'constrained_objective' - } else { - result.slot<-'objective_measures' - } - objcols<-unlist(DE[[result.slot]]) - names(objcols)<-name.replace(names(objcols)) + } else{ + message("Trajectory cannot be drawn because return object or constraints were not passed.") + } + + + ## @TODO: Generalize this to find column containing the "risk" metric + if(length(names(DE)[which(names(DE)=='constrained_objective')])) { + result.slot<-'constrained_objective' + } else { + result.slot<-'objective_measures' + } + objcols<-unlist(DE[[result.slot]]) + 
names(objcols)<-name.replace(names(objcols)) + return.column = pmatch(return.col,names(objcols)) + if(is.na(return.column)) { + return.col = paste(return.col,return.col,sep='.') return.column = pmatch(return.col,names(objcols)) - if(is.na(return.column)) { - return.col = paste(return.col,return.col,sep='.') - return.column = pmatch(return.col,names(objcols)) - } + } + risk.column = pmatch(risk.col,names(objcols)) + if(is.na(risk.column)) { + risk.col = paste(risk.col,risk.col,sep='.') risk.column = pmatch(risk.col,names(objcols)) - if(is.na(risk.column)) { - risk.col = paste(risk.col,risk.col,sep='.') - risk.column = pmatch(risk.col,names(objcols)) - } - # risk and return metrics for the optimal weights if the RP object does not - # contain the metrics specified by return.col or risk.col - if(is.na(return.column) | is.na(risk.column)){ - return.col <- gsub("\\..*", "", return.col) - risk.col <- gsub("\\..*", "", risk.col) - # warning(return.col,' or ', risk.col, ' do not match extractStats output of $objective_measures slot') - opt_weights <- DE$weights - ret <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=return.col)) - risk <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=risk.col)) - points(risk, ret, col="blue", pch=16) #optimal - } else { - points(objcols[risk.column], objcols[return.column], col="blue", pch=16) # optimal - } - axis(1, cex.axis = cex.axis, col = element.color) - axis(2, cex.axis = cex.axis, col = element.color) - box(col = element.color) + } + # risk and return metrics for the optimal weights if the RP object does not + # contain the metrics specified by return.col or risk.col + if(is.na(return.column) | is.na(risk.column)){ + return.col <- gsub("\\..*", "", return.col) + risk.col <- gsub("\\..*", "", risk.col) + # warning(return.col,' or ', risk.col, ' do not match extractStats output of $objective_measures slot') + opt_weights <- DE$weights + ret <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=return.col)) + risk <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=risk.col)) + points(risk, ret, col="blue", pch=16) #optimal + } else { + points(objcols[risk.column], objcols[return.column], col="blue", pch=16) # optimal + } + axis(1, cex.axis = cex.axis, col = element.color) + axis(2, cex.axis = cex.axis, col = element.color) + box(col = element.color) } + #' scatter and weights chart for random portfolios #' #' \code{neighbors} may be specified in three ways. @@ -285,7 +290,6 @@ #' \code{risk.col},\code{return.col}, and weights columns all properly named. #' #' @param DE set of random portfolios created by \code{\link{optimize.portfolio}} -#' @param R an optional an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the objective function where required #' @param ... 
any other passthru parameters #' @param risk.col string name of column to use for risk (horizontal axis) #' @param return.col string name of column to use for returns (vertical axis) @@ -295,13 +299,13 @@ #' \code{\link{optimize.portfolio}} #' \code{\link{extractStats}} #' @export -charts.DE <- function(DE, R=NULL, risk.col, return.col, neighbors=NULL, main="DEoptim.Portfolios", ...){ +charts.DE <- function(DE, risk.col, return.col, neighbors=NULL, main="DEoptim.Portfolios", ...){ # Specific to the output of the random portfolio code with constraints # @TODO: check that DE is of the correct class op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,1.5),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.DE(DE, R=R, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) + chart.Scatter.DE(DE, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) par(mar=c(2,4,0,2)) chart.Weights.DE(DE, main="", neighbors=neighbors, ...) par(op) @@ -323,12 +327,11 @@ #' \code{risk.col},\code{return.col}, and weights columns all properly named. #' @param x set of portfolios created by \code{\link{optimize.portfolio}} #' @param ... any other passthru parameters -#' @param R an optional an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the objective function where required #' @param risk.col string name of column to use for risk (horizontal axis) #' @param return.col string name of column to use for returns (vertical axis) #' @param neighbors set of 'neighbor portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} #' @export -plot.optimize.portfolio.DEoptim <- function(x, ..., R=NULL, return.col='mean', risk.col='ES', neighbors=NULL, main='optimized portfolio plot') { - charts.DE(DE=x, R=R, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) -} \ No newline at end of file +plot.optimize.portfolio.DEoptim <- function(x, ..., return.col='mean', risk.col='ES', neighbors=NULL, main='optimized portfolio plot') { + charts.DE(DE=x, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) 
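+ # asset returns and the portfolio specification now travel with the
+ # optimize.portfolio object (x$R, x$portfolio) and are recovered inside
+ # chart.Scatter.DE, which is why R was dropped from the formal arguments
+ # here and in charts.DE above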
+} Modified: pkg/PortfolioAnalytics/R/charts.GenSA.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-20 00:12:41 UTC (rev 2830) +++ pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-20 04:18:38 UTC (rev 2831) @@ -1,6 +1,9 @@ -#' boxplot of the weights in the portfolio + +#' boxplot of the weights of the optimal portfolios #' -#' @param GenSA object created by \code{\link{optimize.portfolio}} +#' Chart the optimal weights and upper and lower bounds on weights of a portfolio run via \code{\link{optimize.portfolio}} +#' +#' @param GenSA optimal portfolio object created by \code{\link{optimize.portfolio}} #' @param neighbors set of 'neighbor' portfolios to overplot #' @param las numeric in \{0,1,2,3\}; the style of axis labels #' \describe{ @@ -84,8 +87,7 @@ #' \code{return.col} must be the name of a function used to compute the return metric on the random portfolio weights #' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights #' -#' @param ROI object created by \code{\link{optimize.portfolio}} -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric +#' @param GenSA object created by \code{\link{optimize.portfolio}} #' @param rp set of weights generated by \code{\link{random_portfolio}} #' @param return.col string matching the objective of a 'return' objective, on vertical axis #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis @@ -95,8 +97,11 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -chart.Scatter.GenSA <- function(GenSA, R, rp=NULL, return.col="mean", risk.col="StdDev", ..., element.color = "darkgray", cex.axis=0.8, main=""){ +chart.Scatter.GenSA <- function(GenSA, rp=NULL, return.col="mean", risk.col="StdDev", ..., element.color = "darkgray", cex.axis=0.8, main=""){ + if(!inherits(GenSA, "optimize.portfolio.GenSA")) stop("GenSA must be of class 'optimize.portfolio.GenSA'") + + R <- GenSA$R # If the user does not pass in rp, then we will generate random portfolios if(is.null(rp)){ permutations <- match.call(expand.dots=TRUE)$permutations @@ -126,7 +131,6 @@ #' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights #' #' @param GenSA object created by \code{\link{optimize.portfolio}} -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric #' @param rp set of weights generated by \code{\link{random_portfolio}} #' @param return.col string matching the objective of a 'return' objective, on vertical axis #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis @@ -138,13 +142,13 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -charts.GenSA <- function(GenSA, R, rp=NULL, return.col="mean", risk.col="StdDev", +charts.GenSA <- function(GenSA, rp=NULL, return.col="mean", risk.col="StdDev", cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ - # Specific to the output of the optimize_method=pso + # Specific to the output of the optimize_method=GenSA op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,2),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.GenSA(GenSA=GenSA, R=R, rp=rp, return.col=return.col, risk.col=risk.col, element.color=element.color, cex.axis=cex.axis, 
main=main, ...=...) + chart.Scatter.GenSA(GenSA=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) par(mar=c(2,4,0,2)) chart.Weights.GenSA(GenSA=GenSA, neighbors=neighbors, las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis, ...=..., main="") par(op) @@ -156,7 +160,6 @@ #' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights #' #' @param GenSA object created by \code{\link{optimize.portfolio}} -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric #' @param rp set of weights generated by \code{\link{random_portfolio}} #' @param return.col string matching the objective of a 'return' objective, on vertical axis #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis @@ -168,6 +171,6 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.GenSA <- function(GenSA, R, rp=NULL, return.col="mean", risk.col="StdDev", cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ - charts.GenSA(GenSA=GenSA, R=R, rp=rp, return.col=return.col, risk.col=risk.col, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) +plot.optimize.portfolio.GenSA <- function(GenSA, rp=NULL, return.col="mean", risk.col="StdDev", cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ + charts.GenSA(GenSA=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) } Modified: pkg/PortfolioAnalytics/R/charts.PSO.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-20 00:12:41 UTC (rev 2830) +++ pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-20 04:18:38 UTC (rev 2831) @@ -84,7 +84,6 @@ #' \code{risk.col} must be the name of a function used to compute the risk metric on the portfolio weights #' #' @param pso object created by \code{\link{optimize.portfolio}} -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric #' @param return.col string matching the objective of a 'return' objective, on vertical axis #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis #' @param ... 
any other passthru parameters @@ -93,9 +92,9 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -chart.Scatter.pso <- function(pso, R, return.col="mean", risk.col="StdDev", ..., element.color = "darkgray", cex.axis=0.8, main=""){ +chart.Scatter.pso <- function(pso, return.col="mean", risk.col="StdDev", ..., element.color = "darkgray", cex.axis=0.8, main=""){ if(!inherits(pso, "optimize.portfolio.pso")) stop("pso must be of class 'optimize.portfolio.pso'") - + R <- pso$R # Object with the "out" value in the first column and the normalized weights # The first row is the optimal "out" value and the optimal weights tmp <- extractStats(pso) @@ -119,7 +118,6 @@ #' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights #' #' @param pso object created by \code{\link{optimize.portfolio}} -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric #' @param return.col string matching the objective of a 'return' objective, on vertical axis #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis #' @param ... any other passthru parameters @@ -130,13 +128,13 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -charts.pso <- function(pso, R, return.col="mean", risk.col="StdDev", +charts.pso <- function(pso, return.col="mean", risk.col="StdDev", cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", ...){ # Specific to the output of the optimize_method=pso op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,2),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.pso(pso=pso, R=R, return.col=return.col, risk.col=risk.col, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) + chart.Scatter.pso(pso=pso, return.col=return.col, risk.col=risk.col, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) par(mar=c(2,4,0,2)) chart.Weights.pso(pso=pso, neighbors=neighbors, las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis, ...=..., main="") par(op) @@ -148,7 +146,6 @@ #' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights #' #' @param pso object created by \code{\link{optimize.portfolio}} -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric #' @param return.col string matching the objective of a 'return' objective, on vertical axis #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis #' @param ... any other passthru parameters @@ -159,7 +156,7 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.pso <- function(pso, R, return.col="mean", risk.col="StdDev", +plot.optimize.portfolio.pso <- function(pso, return.col="mean", risk.col="StdDev", cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", ...){ - charts.pso(pso=pso, R=R, return.col=return.col, risk.col=risk.col, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) + charts.pso(pso=pso, return.col=return.col, risk.col=risk.col, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) 
} Modified: pkg/PortfolioAnalytics/R/charts.ROI.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-20 00:12:41 UTC (rev 2830) +++ pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-20 04:18:38 UTC (rev 2831) @@ -86,9 +86,7 @@ #' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights #' #' @param ROI object created by \code{\link{optimize.portfolio}} -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric -#' @param rp set of weights generated by \code{\link{random_portfolio}} -#' @param portfolio pass in a different portfolio object used in set.portfolio.moments +#' @param rp matrix of random portfolios generated by \code{\link{random_portfolio}} #' @param return.col string matching the objective of a 'return' objective, on vertical axis #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis #' @param ... any other passthru parameters @@ -97,8 +95,11 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -chart.Scatter.ROI <- function(ROI, R, rp=NULL, portfolio=NULL, return.col="mean", risk.col="StdDev", ..., element.color = "darkgray", cex.axis=0.8, main=""){ +chart.Scatter.ROI <- function(ROI, rp=NULL, return.col="mean", risk.col="StdDev", ..., element.color = "darkgray", cex.axis=0.8, main=""){ + if(!inherits(ROI, "optimize.portfolio.ROI")) stop("ROI must be of class 'optimize.portfolio.ROI'") + + R <- ROI$R # If the user does not pass in rp, then we will generate random portfolios if(is.null(rp)){ permutations <- match.call(expand.dots=TRUE)$permutations @@ -131,9 +132,7 @@ #' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights #' #' @param ROI object created by \code{\link{optimize.portfolio}} -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns, used to recalulate the risk and return metric #' @param rp set of weights generated by \code{\link{random_portfolio}} -#' @param portfolio pass in a different portfolio object used in set.portfolio.moments #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis #' @param return.col string matching the objective of a 'return' objective, on vertical axis #' @param ... any other passthru parameters @@ -144,13 +143,14 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -charts.ROI <- function(ROI, R, rp=NULL, portfolio=NULL, risk.col="StdDev", return.col="mean", [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2831 From noreply at r-forge.r-project.org Tue Aug 20 12:55:02 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 20 Aug 2013 12:55:02 +0200 (CEST) Subject: [Returnanalytics-commits] r2832 - in pkg/Meucci: . 
R data demo man Message-ID: <20130820105502.E27F9184E39@r-forge.r-project.org> Author: xavierv Date: 2013-08-20 12:55:02 +0200 (Tue, 20 Aug 2013) New Revision: 2832 Added: pkg/Meucci/R/ FitOrnsteinUhlenbeck.R pkg/Meucci/demo/S_StatArbSwaps.R pkg/Meucci/man/FitOrnsteinUhlenbeck.Rd Modified: pkg/Meucci/DESCRIPTION pkg/Meucci/NAMESPACE pkg/Meucci/data/swapParRates.Rda Log: - added S_StatArbSwaps demo script from chapter 3 and its associated data and functions Modified: pkg/Meucci/DESCRIPTION =================================================================== --- pkg/Meucci/DESCRIPTION 2013-08-20 04:18:38 UTC (rev 2831) +++ pkg/Meucci/DESCRIPTION 2013-08-20 10:55:02 UTC (rev 2832) @@ -87,3 +87,5 @@ 'FitMultivariateGarch.R' 'MvnRnd.R' 'MleRecursionForStudentT.R' + ' + FitOrnsteinUhlenbeck.R' Modified: pkg/Meucci/NAMESPACE =================================================================== --- pkg/Meucci/NAMESPACE 2013-08-20 04:18:38 UTC (rev 2831) +++ pkg/Meucci/NAMESPACE 2013-08-20 10:55:02 UTC (rev 2832) @@ -12,6 +12,7 @@ export(EntropyProg) export(FitExpectationMaximization) export(FitMultivariateGarch) +export(FitOrnsteinUhlenbeck) export(garch1f4) export(garch2f8) export(GenerateLogNormalDistribution) Added: pkg/Meucci/R/ FitOrnsteinUhlenbeck.R =================================================================== --- pkg/Meucci/R/ FitOrnsteinUhlenbeck.R (rev 0) +++ pkg/Meucci/R/ FitOrnsteinUhlenbeck.R 2013-08-20 10:55:02 UTC (rev 2832) @@ -0,0 +1,55 @@ +#' Fit a multivariate OU process at estimation step tau, as described in A. Meucci +#' "Risk and Asset Allocation", Springer, 2005 +#' +#' @param Y : [matrix] (T x N) +#' @param tau : [scalar] time step +#' +#' @return Mu : [vector] long-term means +#' @return Th : [matrix] whose eigenvalues have positive real part / mean reversion speed +#' @return Sig : [matrix] Sig = S * S', covariance matrix of Brownian motions +#' +#' @note +#' o dY_t = -Th * (Y_t - Mu) * dt + S * dB_t where +#' o dB_t: vector of Brownian motions +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "EfficientFrontierReturns.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} +#' @export + +FitOrnsteinUhlenbeck = function( Y, tau ) +{ + T = nrow(Y); + N = ncol(Y); + + X = Y[ -1, ]; + F = cbind( matrix( 1, T-1, 1 ), Y[ -nrow(Y), ] ); + E_XF = t(X) %*% F / T; + E_FF = t(F) %*% F / T; + B = E_XF %*% solve( E_FF ); + if( length( B[ , -1 ] ) != 1 ) + { + Th = -logm( B[ , -1 ] ) / tau; + + }else + { + Th = -log( B[ , -1 ] ) / tau; + } + + Mu = solve( diag( 1, N ) - B[ , -1 ] ) %*% B[ , 1 ] ; + + U = F %*% t(B) - X; + + Sig_tau = cov(U); + + N = length(Mu); + TsT = kron( Th, diag( 1, N ) ) + kron( diag( 1, N ), Th ); + + VecSig_tau = matrix(Sig_tau, N^2, 1); + VecSig = ( solve( diag( 1, N^2 ) - expm( -TsT * tau ) ) %*% TsT ) %*% VecSig_tau; + Sig = matrix( VecSig, N, N ); + + return( list( Mu = Mu, Theta = Th, Sigma = Sig ) ) +} \ No newline at end of file Modified: pkg/Meucci/data/swapParRates.Rda =================================================================== (Binary files differ) Added: pkg/Meucci/demo/S_StatArbSwaps.R =================================================================== --- pkg/Meucci/demo/S_StatArbSwaps.R (rev 0) +++ pkg/Meucci/demo/S_StatArbSwaps.R 2013-08-20 10:55:02 UTC (rev 2832) @@ -0,0 +1,69 @@ +#' This script search for cointegrated stat-arb strategies among swap contracts, as described in A. Meucci, +#' "Risk and Asset Allocation", Springer, 2005, Chapter 3. 
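+#' The script estimates the covariance of the par swap rates, extracts its
+#' principal components, fits an Ornstein-Uhlenbeck process to each
+#' eigendirection with FitOrnsteinUhlenbeck, and plots every series against
+#' its long-term mean and one-standard-deviation bands; directions with slow
+#' mean reversion (small theta) are the candidate stat-arb trades.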
+#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_StatArbSwaps.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} + +# TODO: Check the loadings of the principal components analysis, fix the date ticks on the plots. + +################################################################################################################## +### Load data +load("../data/swapParRates.Rda"); + +################################################################################################################## +### Estimate covariance and PCA decomposition +S = cov( swapParRates$Rates ); +PC = princomp( covmat=S ); +E = PC$loadings +Lam = ( PC$sdev )^2 +################################################################################################################## +### Set up dates ticks +dev.new(); +h = plot(swapParRates$Dates, swapParRates$Dates); +XTick = NULL; +years = as.numeric(format(swapParRates$Dates[1],"%Y")) : as.numeric(format(swapParRates$Dates[length(swapParRates$Dates)],"%Y")) + + +for( n in years ) +{ + XTick = cbind( XTick, datenum(n,1,1) ); ##ok +} + +a = min(swapParRates$Dates); +b = max(swapParRates$Dates); +X_Lim = cbind( a - 0.01 * ( b-a ), b + 0.01 * ( b - a ) ); + +################################################################################################################## +### Plots +nLam = length(Lam); +Thetas = matrix( NaN, nLam, 1); +for( n in 1 : nLam ) +{ + Y = swapParRates$Rates %*% E[ , n ] * 10000; + FOU = FitOrnsteinUhlenbeck(Y, 1/252); + Sd_Y = sqrt( FOU$Sigma / (2 * FOU$Theta)); + Thetas[n] = FOU$Theta; + + dev.new(); + current_line = array( Y[ length(Y) ], length(swapParRates$Dates ) ); + Mu_line = array( FOU$Mu, length(swapParRates$Dates) ); + Z_line_up = Mu_line + Sd_Y[1]; + Z_line_dn = Mu_line - Sd_Y[1]; + + plot( swapParRates$Dates , Y, "l", xlab = "year", ylab = "basis points", main = paste( "eigendirection n. ", n, ", theta = ", FOU$Theta ) ); + lines( swapParRates$Dates, Mu_line, col = "blue" ); + lines( swapParRates$Dates, Z_line_up, col = "red" ); + lines( swapParRates$Dates, Z_line_dn, col = "red" ); + lines( swapParRates$Dates, current_line, col = "green"); + + #set(gca(), 'xlim', X_Lim, 'XTick', XTick); + #datetick('x','yy','keeplimits','keepticks'); + #grid off; + #title(['eigendirection n. ' num2str(n) ', theta = ' num2str(Theta)],'FontWeight','bold'); +} + +dev.new(); +plot( 1:length( Lam ), Thetas, "l", xlab = " eigendirection n.", ylab = "theta" ); Added: pkg/Meucci/man/FitOrnsteinUhlenbeck.Rd =================================================================== --- pkg/Meucci/man/FitOrnsteinUhlenbeck.Rd (rev 0) +++ pkg/Meucci/man/FitOrnsteinUhlenbeck.Rd 2013-08-20 10:55:02 UTC (rev 2832) @@ -0,0 +1,38 @@ +\name{FitOrnsteinUhlenbeck} +\alias{FitOrnsteinUhlenbeck} +\title{Fit a multivariate OU process at estimation step tau, as described in A. Meucci +"Risk and Asset Allocation", Springer, 2005} +\usage{ + FitOrnsteinUhlenbeck(Y, tau) +} +\arguments{ + \item{Y}{: [matrix] (T x N)} + + \item{tau}{: [scalar] time step} +} +\value{ + Mu : [vector] long-term means + + Th : [matrix] whose eigenvalues have positive real part / + mean reversion speed + + Sig : [matrix] Sig = S * S', covariance matrix of + Brownian motions +} +\description{ + Fit a multivariate OU process at estimation step tau, as + described in A. 
Meucci "Risk and Asset Allocation", + Springer, 2005 +} +\note{ + o dY_t = -Th * (Y_t - Mu) * dt + S * dB_t where o dB_t: + vector of Brownian motions +} +\author{ + Xavier Valls \email{flamejat at gmail.com} +} +\references{ + \url{http://symmys.com/node/170} See Meucci's script for + "EfficientFrontierReturns.m" +} + From noreply at r-forge.r-project.org Tue Aug 20 13:31:43 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 20 Aug 2013 13:31:43 +0200 (CEST) Subject: [Returnanalytics-commits] r2833 - pkg/Meucci/data Message-ID: <20130820113143.E6D1E184E39@r-forge.r-project.org> Author: xavierv Date: 2013-08-20 13:31:43 +0200 (Tue, 20 Aug 2013) New Revision: 2833 Added: pkg/Meucci/data/db.Rda Log: - added db data file Added: pkg/Meucci/data/db.Rda =================================================================== (Binary files differ) Property changes on: pkg/Meucci/data/db.Rda ___________________________________________________________________ Added: svn:mime-type + application/octet-stream From noreply at r-forge.r-project.org Tue Aug 20 13:46:18 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 20 Aug 2013 13:46:18 +0200 (CEST) Subject: [Returnanalytics-commits] r2834 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: . R man vignettes Message-ID: <20130820114618.9084018548E@r-forge.r-project.org> Author: shubhanm Date: 2013-08-20 13:46:18 +0200 (Tue, 20 Aug 2013) New Revision: 2834 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Gsoc-iid.Rproj pkg/PerformanceAnalytics/sandbox/Shubhankit/R/AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDD.Opt.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Norm.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/SterlingRatio.Norm.R pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CDD.Opt.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Normalized.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/SterlingRatio.Norm.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.EmaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ACFSTDEV-Graph10.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ACFSTDEV.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ACFSTDEV.rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ConditionalDrawdown-Graph10.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ConditionalDrawdown.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ConditionalDrawdown.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/GLMReturn-Graph1.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/GLMReturn-Graph10.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/GLMReturn.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/GLMReturn.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/GLMSmoothIndex.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/GLMSmoothIndex.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/LoSharpe.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/LoSharpeRatio.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/LoSharpeRatio.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/MaximumLoss.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/NormCalmar-Graph10.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/NormCalmar.pdf 
pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/NormCalmar.rnw
pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/OkunevWhite-Graph1.pdf
pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/OkunevWhite-Graph10.pdf
pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/OkunevWhite.Rnw
pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/OkunevWhite.pdf
pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/Rplots.pdf
pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ShaneAcarMaxLoss-003.pdf
pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ShaneAcarMaxLoss.Rnw
pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ShaneAcarMaxLoss.pdf
Modified:
pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDDopt.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDrawdown.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.Autocorrelation.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.ComparitiveReturn.GLM.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.UnsmoothReturn.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd
pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd
pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd
pkg/PerformanceAnalytics/sandbox/Shubhankit/man/chart.Autocorrelation.Rd
pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.ComparitiveReturn.GLM.Rd
pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.NormDD.Rd
pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.UnsmoothReturn.Rd
Log: added man/*.Rd files and vignettes for all code written

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/Gsoc-iid.Rproj
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/Gsoc-iid.Rproj (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/Gsoc-iid.Rproj 2013-08-20 11:46:18 UTC (rev 2834)
@@ -0,0 +1,17 @@
+Version: 1.0
+
+RestoreWorkspace: Yes
+SaveWorkspace: Yes
+AlwaysSaveHistory: Yes
+
+EnableCodeIndexing: Yes
+UseSpacesForTab: Yes
+NumSpacesForTab: 2
+Encoding: UTF-8
+
+RnwWeave: Sweave
+LaTeX: pdfLaTeX
+
+BuildType: Package
+PackageInstallArgs: --no-multiarch
+PackageRoxygenize: rd

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R 2013-08-20 11:31:43 UTC (rev 2833)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R 2013-08-20 11:46:18 UTC (rev 2834)
@@ -1,5 +1,5 @@
 #' @title Autocorrelation adjusted Standard Deviation
-#'
+#' @description Incorporates the lagged autocorrelation of returns into the translation of standard deviation across time scales
 #' @aliases sd.multiperiod sd.annualized StdDev.annualized
 #' @param x an xts, vector, matrix, data frame, timeSeries or zoo object of
 #' asset returns
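A minimal sketch of the adjustment that @description refers to, assuming monthly data; the helper name ac.sd.annualized and the lag choice q = 6 are illustrative only, not the package's API (ACStdDev.annualized is):

# annualize a monthly sd while accounting for the first q autocorrelation lags:
# Var(scale-period sum) = scale*sigma^2*(1 + 2*sum_{k=1..q} (1 - k/scale)*rho_k)
ac.sd.annualized <- function(x, q = 6, scale = 12) {
  x   <- as.numeric(na.omit(x))
  rho <- acf(x, lag.max = q, plot = FALSE)$acf[-1]  # rho_1 .. rho_q
  adj <- 1 + 2 * sum((1 - (1:q) / scale) * rho)
  sd(x) * sqrt(scale * max(adj, 0))                 # floor at 0 for strongly negative rho
}
# compare: ac.sd.annualized(edhec[, 1]) versus sd(edhec[, 1]) * sqrt(12)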
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/AcarSim.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/AcarSim.R (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/AcarSim.R 2013-08-20 11:46:18 UTC (rev 2834)
@@ -0,0 +1,89 @@
+#' @title Acar and Shane Maximum Loss
+#'
+#' @description To get some insight on the relationships between maximum drawdown per unit of volatility
+#' and mean return divided by volatility, we have proceeded to Monte-Carlo simulations.
+#' We have simulated cash flows over a period of 36 monthly returns and measured maximum
+#' drawdown for varied levels of annualised return divided by volatility varying from minus
+#' two to two by step of 0.1. The process has been repeated six thousand times.
+#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of
+#' asset returns
+#' @author Peter Carl, Brian Peterson, Shubhankit Mohan
+#' @references Maximum Loss and Maximum Drawdown in Financial Markets, \emph{International Conference Sponsored by BNP and Imperial College on:
+#' Forecasting Financial Markets, London, United Kingdom, May 1997}
+#' @keywords Maximum Loss Simulated Drawdown
+#' @examples
+#' library(PerformanceAnalytics)
+#' AcarSim(edhec)
+#' @rdname AcarSim
+#' @export
+AcarSim <-
+ function(R)
+ {
+ R = checkData(R, method="xts")
+ # Get dimensions and labels
+ # simulated parameters using edhec data
+mu=mean(Return.annualized(edhec))
+monthly=(1+mu)^(1/12)-1
+sig=StdDev.annualized(edhec[,1])[1];
+T= 36
+j=1
+dt=1/T
+nsim=6000;
+thres=4;
+r=matrix(0,nsim,T+1)
+monthly = 0
+r[,1]=monthly;
+# Sigma 'monthly volatility' will be the varying term
+ratio= seq(-2, 2, by=.1);
+len = length(ratio)
+ddown=array(0, dim=c(nsim,len,thres))
+fddown=array(0, dim=c(len,thres))
+Z <- array(0, c(len))
+for(i in 1:len)
+{
+ monthly = sig*ratio[i];
+
+ for(j in 1:nsim)
+{
+ dz=rnorm(T)
+
+
+ r[j,2:37]=monthly+(sig*dz*sqrt(3*dt))
+
+ ddown[j,i,1]= ES((r[j,]),.99)
+ ddown[j,i,1][is.na(ddown[j,i,1])] <- 0
+ fddown[i,1]=fddown[i,1]+ddown[j,i,1]
+ ddown[j,i,2]= ES((r[j,]),.95)
+ ddown[j,i,2][is.na(ddown[j,i,2])] <- 0
+ fddown[i,2]=fddown[i,2]+ddown[j,i,2]
+ ddown[j,i,3]= ES((r[j,]),.90)
+ ddown[j,i,3][is.na(ddown[j,i,3])] <- 0
+ fddown[i,3]=fddown[i,3]+ddown[j,i,3]
+ ddown[j,i,4]= ES((r[j,]),.85)
+ ddown[j,i,4][is.na(ddown[j,i,4])] <- 0
+ fddown[i,4]=fddown[i,4]+ddown[j,i,4]
+ assign("last.warning", NULL, envir = baseenv())
+}
+}
+plot(((fddown[,1])/(sig*nsim)),xlab="Annualised Return/Volatility from [-2,2]",ylab="Maximum Drawdown/Volatility",type='o',col="blue")
+lines(((fddown[,2])/(sig*nsim)),type='o',col="pink")
+lines(((fddown[,3])/(sig*nsim)),type='o',col="green")
+lines(((fddown[,4])/(sig*nsim)),type='o',col="red")
+legend(32,-4, c("%99", "%95", "%90","%85"), col = c("blue","pink","green","red"), text.col= "black",
+ lty = c(2, -1, 1), pch = c(-1, 3, 4), merge = TRUE, bg='gray90')
+
+title("Maximum Drawdown/Volatility as a function of Return/Volatility
+36 monthly returns simulated 6,000 times")
+}
+
+###############################################################################
+# R (http://r-project.org/) Econometrics for Performance and Risk Analysis
+#
+# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson
+#
+# This R package is distributed under the terms of the GNU Public License (GPL)
+# for full details see the file COPYING
+#
+# $Id: AcarSim.R 2163 2012-07-16 00:30:19Z braverock $
+#
+############################################################################### \ No newline at end of file
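A quick empirical counterpart to the simulated curves above, using only exported PerformanceAnalytics functions (the column choice is arbitrary):

library(PerformanceAnalytics)
data(edhec)
# realized maximum drawdown per unit of annualized volatility (the y-axis above)
maxDrawdown(edhec[, 1]) / StdDev.annualized(edhec[, 1])
# annualized return per unit of annualized volatility (the x-axis above)
Return.annualized(edhec[, 1]) / StdDev.annualized(edhec[, 1])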
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDD.Opt.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDD.Opt.R (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDD.Opt.R 2013-08-20 11:46:18 UTC (rev 2834)
@@ -0,0 +1,50 @@
+#' @title Chekhlov Conditional Drawdown at Risk Optimization
+#'
+#' @description A new one-parameter family of risk measures called Conditional Drawdown (CDD) has
+#'been proposed. These measures of risk are functionals of the portfolio drawdown (underwater) curve considered in active portfolio management. For some value of the tolerance
+#' parameter, in the case of a single sample path, the drawdown functional is defined as
+#'the mean of the worst (1 - \eqn{\alpha})\% drawdowns.
+#'@details This section formulates a portfolio optimization problem with drawdown risk measure and suggests efficient optimization techniques for its solving. Optimal asset
+#' allocation considers:
+#' \enumerate{
+#' \item Generation of sample paths for the assets' rates of return.
+#' \item Uncompounded cumulative portfolio rate of return rather than compounded one.
+#' }
+#' @param Ra return vector of the portfolio
+#' @param p confidence interval
+#' @author Peter Carl, Brian Peterson, Shubhankit Mohan
+#' @references Drawdown Measure in Portfolio Optimization, \emph{International Journal of Theoretical and Applied Finance}, Vol. 8, No. 1 (2005) 13-58
+#' @keywords Conditional Drawdown models
+#' @examples
+#'
+#' library(PerformanceAnalytics)
+#' data(edhec)
+#' CDD.Opt(edhec)
+#' @rdname CDD.Opt
+#' @export
+
+CDD.Opt = function(rmat, alpha=0.05, rmin=0, wmin=0, wmax=1, weight.sum=1)
+{
+ require(Rglpk)
+ n = ncol(rmat) # number of assets
+ s = nrow(rmat) # number of scenarios i.e. periods
+ averet = colMeans(rmat)
+ # create objective vector, constraint matrix, constraint rhs
+ Amat = rbind(cbind(rbind(1,averet),matrix(data=0,nrow=2,ncol=s+1)),
+ cbind(rmat,diag(s),1))
+ objL = c(rep(0,n), as.numeric(Cdrawdown(rmat,.9)), -1)
+ bvec = c(weight.sum,rmin,rep(0,s))
+ # direction vector
+ dir.vec = c("==",">=",rep(">=",s))
+ # bounds on weights
+ bounds = list(lower = list(ind = 1:n, val = rep(wmin,n)),
+ upper = list(ind = 1:n, val = rep(wmax,n)))
+ res = Rglpk_solve_LP(obj=objL, mat=Amat, dir=dir.vec, rhs=bvec,
+ types=rep("C",length(objL)), max=T, bounds=bounds)
+ w = as.numeric(res$solution[1:n])
+ return(list(w=w,status=res$status))
+}
+#' Guy Yollin work
+#'
+#'
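A hedged usage sketch for the LP wrapped by CDD.Opt above; the date window, columns, and rmin value are invented for illustration, and the function also needs Cdrawdown() from this package plus the Rglpk package installed:

library(PerformanceAnalytics)
library(Rglpk)
data(edhec)
# the first two rows of Amat carry the budget and return-floor constraints,
# the remaining s rows the per-period inequalities built from rmat
opt <- CDD.Opt(coredata(edhec["2000::2005", 1:4]), rmin = 0.005, wmax = 0.5)
opt$w       # weights, in the same order as the columns of rmat
opt$status  # 0 means Rglpk reported an optimal solution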
Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDDopt.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDDopt.R 2013-08-20 11:31:43 UTC (rev 2833)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDDopt.R 2013-08-20 11:46:18 UTC (rev 2834)
@@ -1,4 +1,29 @@
-cDDOpt = function(rmat, alpha=0.05, rmin=0, wmin=0, wmax=1, weight.sum=1)
+#' @title Chekhlov Conditional Drawdown at Risk
+#'
+#' @description A new one-parameter family of risk measures called Conditional Drawdown (CDD) has
+#'been proposed. These measures of risk are functionals of the portfolio drawdown (underwater) curve considered in active portfolio management. For some value of the tolerance
+#' parameter, in the case of a single sample path, the drawdown functional is defined as
+#'the mean of the worst (1 - \eqn{\alpha})\% drawdowns.
+#'@details This section formulates a portfolio optimization problem with drawdown risk measure and suggests efficient optimization techniques for its solving. Optimal asset
+#' allocation considers:
+#' 1) Generation of sample paths for the assets' rates of return.
+#' 2) Uncompounded cumulative portfolio rate of return rather than compounded one.
+#'
+#' @param Ra return vector of the portfolio
+#' @param p confidence interval
+#' @author Peter Carl, Brian Peterson, Shubhankit Mohan
+#' @references Drawdown Measure in Portfolio Optimization, \emph{International Journal of Theoretical and Applied Finance}, Vol. 8, No. 1 (2005) 13-58
+#' @keywords Conditional Drawdown models
+#' @examples
+#'
+#' library(PerformanceAnalytics)
+#' data(edhec)
+#' CDDOpt(edhec)
+#' @rdname Cdrawdown
+#' @export
+
+CDDOpt = function(rmat, alpha=0.05, rmin=0, wmin=0, wmax=1, weight.sum=1)
 {
 require(Rglpk)
 n = ncol(rmat) # number of assets
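The "uncompounded cumulative portfolio rate of return" in item 2 above is what keeps the drawdown constraints linear in the weights; a short illustration of the difference (column choice arbitrary):

library(PerformanceAnalytics)
data(edhec)
cum.simple   <- cumsum(edhec[, 1])           # uncompounded: linear in returns
cum.compound <- cumprod(1 + edhec[, 1]) - 1  # compounded: nonlinear in returns
# the LP formulations above work on the linear (uncompounded) path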
 #' The model develops the concept of a drawdown measure by generalizing the notion
 #' of the CDD to the case of several sample paths for the portfolio's uncompounded rate
 #' of return.
 #' @param Ra return vector of the portfolio
 #' @param p confidence interval
-#' @author R Project
+#' @author Peter Carl, Brian Peterson, Shubhankit Mohan
 #' @references Chekhlov, A., Uryasev, S. and Zabarankin, M., Drawdown Measure in Portfolio Optimization,
 #' \emph{International Journal of Theoretical and Applied Finance}, Vol. 8, No. 1 (2005), 13-58.
 #' @keywords Conditional Drawdown models

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Norm.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Norm.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Norm.R	2013-08-20 11:46:18 UTC (rev 2834)
@@ -0,0 +1,69 @@
+#' @title Normalized Calmar reward/risk ratio
+#'
+#' @description Normalized Calmar and Sterling Ratios are yet another method of creating a
+#' risk-adjusted measure for ranking investments similar to the Sharpe Ratio.
+#'
+#' @details
+#' Both the Normalized Calmar and the Sterling ratio are the ratio of annualized return
+#' over the absolute value of the maximum drawdown of an investment. The
+#' Sterling ratio adds an excess risk measure to the maximum drawdown,
+#' traditionally and defaulting to 10\%.
+#'
+#' It is also traditional to use a three year return series for these
+#' calculations, although the functions included here make no effort to
+#' determine the length of your series. If you want to use a subset of your
+#' series, you'll need to truncate or subset the input data to the desired
+#' length.
+#'
+#'
+#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of
+#' asset returns
+#' @param tau time scaling parameter, in years (default 1)
+#' @param scale number of periods in a year (daily scale = 252, monthly scale =
+#' 12, quarterly scale = 4)
+#' @param excess for Sterling Ratio, excess amount to add to the max drawdown,
+#' traditionally and default .1 (10\%)
+#' @author Brian G. Peterson, Peter Carl, Shubhankit Mohan
+#' @references Bacon, Carl. Magdon-Ismail, M. and Amir Atiya, \emph{Maximum drawdown}. Risk Magazine, 01 Oct 2004.
+#' @keywords ts multivariate distribution models
+#' @examples
+#'
+#' data(managers)
+#' CalmarRatio.Norm(managers[,1,drop=FALSE])
+#' CalmarRatio.Norm(managers[,1:6])
+#' @export
+#' @rdname CalmarRatio.Norm
+
+CalmarRatio.Norm <- function (R, tau = 1, scale = NA)
+{ # @author Brian G. Peterson
+
+  # DESCRIPTION:
+  # Inputs:
+  # Ra: in this case, the function anticipates having a return stream as input,
+  # rather than prices.
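Before the function body continues, a hedged base-R sketch of the unnormalized quantities described in the details above: the Calmar ratio is annualized return over the absolute maximum drawdown, and the Sterling ratio adds the traditional 10% excess to the drawdown. (The QP.Norm() normalization used later in this file is defined elsewhere in the sandbox and is not reproduced here; the return series is hypothetical.)

set.seed(1)
r <- rnorm(36, 0.008, 0.05)                  # three years of hypothetical monthly returns
ann_ret <- prod(1 + r)^(12 / length(r)) - 1  # annualized geometric return
wealth  <- cumprod(1 + r)
mdd     <- max(1 - wealth / cummax(wealth))  # maximum drawdown as a positive fraction
ann_ret / mdd                                # plain Calmar ratio
ann_ret / (mdd + 0.1)                        # plain Sterling ratio, 10% excess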
+  # tau : scaled Time in Years
+  # scale: number of periods per year
+  # Outputs:
+  # This function returns a Normalized Calmar Ratio
+
+  # FUNCTION:
+
+  R = checkData(R)
+  if(is.na(scale)) {
+    freq = periodicity(R)
+    switch(freq$scale,
+           minute = {stop("Data periodicity too high")},
+           hourly = {stop("Data periodicity too high")},
+           daily = {scale = 252},
+           weekly = {scale = 52},
+           monthly = {scale = 12},
+           quarterly = {scale = 4},
+           yearly = {scale = 1}
+    )
+  }
+  Time = nyears(R)
+  annualized_return = Return.annualized(R, scale=scale)
+  drawdown = abs(maxDrawdown(R))
+  result = (annualized_return/drawdown)*(QP.Norm(R,Time)/QP.Norm(R,tau))*(tau/Time)
+  rownames(result) = "Normalized Calmar Ratio"
+  return(result)
+}

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R	2013-08-20 11:31:43 UTC (rev 2833)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R	2013-08-20 11:46:18 UTC (rev 2834)
@@ -5,7 +5,15 @@
 #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns
 #' @author Peter Carl, Brian Peterson, Shubhankit Mohan
 #' @keywords Expected Drawdown Using Brownian Motion Assumptions
-#' @rdname EmaxDDGBM
+#' @references An Analysis of the Maximum Drawdown Measure, \emph{Journal of Applied Probability}
+#' (2004)
+#' @keywords Drawdown models Brownian Motion Assumptions
+#' @examples
+#'
+#' library(PerformanceAnalytics)
+#' data(edhec)
+#' table.EMaxDDGBM(edhec)
+#' @rdname table.EmaxDDGBM
 #' @export

 table.EMaxDDGBM <- function (R,digits =4)

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R	2013-08-20 11:31:43 UTC (rev 2833)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R	2013-08-20 11:46:18 UTC (rev 2834)
@@ -4,17 +4,17 @@
 #' a sum of squares of the Moving Average lag coefficients.
 #' This measure is well known in the industrial organization literature as the
 #' Herfindahl index, a measure of the concentration of firms in a given industry.
-#' The index is maximized when one coefficient is 1 and the rest are 0, in which case x ? 1: In the context of
+#' The index is maximized when one coefficient is 1 and the rest are 0. In the context of
 #' smoothed returns, a lower value of x implies more smoothing, and the upper bound
 #' of 1 implies no smoothing, hence x is referred to as a 'smoothing index'.
 #'
-#' \deqn{ R_t = {\mu} + {\beta}{{\delta}}_t+ \xi_t}
+#' \deqn{ R_t = \mu + \beta \delta_t + \xi_t}
 #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of
 #' asset returns
-#' @author R
+#' @author Peter Carl, Brian Peterson, Shubhankit Mohan
 #' @aliases Return.Geltner
 #' @references "An econometric model of serial correlation and illiquidity in
-#' hedge fund returns" Mila Getmansky1, Andrew W. Lo*, Igor Makarov
+#' hedge fund returns" Mila Getmansky, Andrew W. Lo, Igor Makarov
 #'
 #' @keywords ts multivariate distribution models non-iid
 #' @examples

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R	2013-08-20 11:31:43 UTC (rev 2833)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R	2013-08-20 11:46:18 UTC (rev 2834)
@@ -23,7 +23,7 @@
 #' @author Brian Peterson, Peter Carl, Shubhankit Mohan
 #' @references Mila Getmansky, Andrew W. Lo, Igor Makarov, \emph{An econometric model of serial correlation
 #' and illiquidity in hedge fund returns}, Journal of Financial Economics 74 (2004).
-#' @keywords ts multivariate distribution models
+#' @keywords ts multivariate distribution model

 Return.GLM <- function (Ra,q=3)
 { # @author Brian G. Peterson, Peter Carl

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/SterlingRatio.Norm.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/SterlingRatio.Norm.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/SterlingRatio.Norm.R	2013-08-20 11:46:18 UTC (rev 2834)
@@ -0,0 +1,68 @@
+#' @title Normalized Sterling reward/risk ratio
+#'
+#' @description Normalized Sterling and Calmar Ratios are yet another method of creating a
+#' risk-adjusted measure for ranking investments similar to the Sharpe Ratio.
+#'
+#' @details
+#' Both the Normalized Sterling and the Calmar ratio are the ratio of annualized return
+#' over the absolute value of the maximum drawdown of an investment. The
+#' Sterling ratio adds an excess risk measure to the maximum drawdown,
+#' traditionally and defaulting to 10\%.
+#'
+#' It is also traditional to use a three year return series for these
+#' calculations, although the functions included here make no effort to
+#' determine the length of your series. If you want to use a subset of your
+#' series, you'll need to truncate or subset the input data to the desired
+#' length.
+#'
+#'
+#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of
+#' asset returns
+#' @param tau time scaling parameter, in years (default 1)
+#' @param scale number of periods in a year (daily scale = 252, monthly scale =
+#' 12, quarterly scale = 4)
+#' @param excess for Sterling Ratio, excess amount to add to the max drawdown,
+#' traditionally and default .1 (10\%)
+#' @author Brian G. Peterson, Peter Carl, Shubhankit Mohan
+#' @references Bacon, Carl. Magdon-Ismail, M. and Amir Atiya, \emph{Maximum drawdown}. Risk Magazine, 01 Oct 2004.
+#' @keywords ts multivariate distribution models
+#' @examples
+#'
+#' data(managers)
+#' SterlingRatio.Norm(managers[,1,drop=FALSE])
+#' SterlingRatio.Norm(managers[,1:6])
+#' @export
+#' @rdname SterlingRatio.Norm
+
+SterlingRatio.Norm <-
+  function (R, tau=1, scale=NA, excess=.1)
+  { # @author Brian G. Peterson
+
+    # DESCRIPTION:
+    # Inputs:
+    # Ra: in this case, the function anticipates having a return stream as input,
+    # rather than prices.
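Stepping back to GLMSmoothIndex.R above: a minimal sketch of the Herfindahl-type smoothing index, assuming hypothetical moving-average smoothing weights that sum to one.

theta <- c(0.6, 0.3, 0.1)   # hypothetical MA(2) smoothing profile, sums to one
sum(theta^2)                # smoothing index: 0.46 here
sum(c(1, 0, 0)^2)           # one coefficient equal to 1: upper bound of 1, no smoothing
sum(rep(1/3, 3)^2)          # equal weights over 3 lags: minimum of 1/3, heaviest smoothing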
+    # scale: number of periods per year
+    # Outputs:
+    # This function returns a Normalized Sterling Ratio
+
+    # FUNCTION:
+    Time = nyears(R)
+    R = checkData(R)
+    if(is.na(scale)) {
+      freq = periodicity(R)
+      switch(freq$scale,
+             minute = {stop("Data periodicity too high")},
+             hourly = {stop("Data periodicity too high")},
+             daily = {scale = 252},
+             weekly = {scale = 52},
+             monthly = {scale = 12},
+             quarterly = {scale = 4},
+             yearly = {scale = 1}
+      )
+    }
+    annualized_return = Return.annualized(R, scale=scale)
+    drawdown = abs(maxDrawdown(R)+excess)
+    result = annualized_return/drawdown*(QP.Norm(R,Time)/QP.Norm(R,tau))*(tau/Time)
+    rownames(result) = paste("Normalized Sterling Ratio (Excess = ", round(excess*100,0), "%)", sep="")
+    return(result)
+  }

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.Autocorrelation.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.Autocorrelation.R	2013-08-20 11:31:43 UTC (rev 2833)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.Autocorrelation.R	2013-08-20 11:46:18 UTC (rev 2834)
@@ -1,6 +1,6 @@
 #' @title Stacked Bar Autocorrelation Plot
 #'
-#' @description A wrapper to create box and whiskers plot of comparitive inputs
+#' @description A wrapper to create box and whiskers plot of lagged autocorrelation analysis
 #'
 #' @details We have also provided controls for all the symbols and lines in the chart.
 #' One default, set by \code{as.Tufte=TRUE}, will strip chartjunk and draw a

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.ComparitiveReturn.GLM.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.ComparitiveReturn.GLM.R	2013-08-20 11:31:43 UTC (rev 2833)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.ComparitiveReturn.GLM.R	2013-08-20 11:46:18 UTC (rev 2834)
@@ -1,6 +1,6 @@
-#' Compenent Decomposition of Table of Unsmooth Returns for GLM Model
+#' @title Component Decomposition of Table of Unsmooth Returns for GLM Model
 #'
-#' Creates a table of comparitive changes in Normality Properties for Third
+#' @description Creates a table of comparative changes in Normality Properties for the Third
 #' and Fourth Moment Vectors, i.e. Skewness and Kurtosis, for Original and Unsmooth
 #' Returns respectively
 #'
@@ -9,9 +9,9 @@
 #' @param ci confidence interval, defaults to 95\%
 #' @param n number of series lags
 #' @param digits number of digits to round results to
-#' @author R
+#' @author Peter Carl, Brian Peterson, Shubhankit Mohan
 #' @keywords ts unsmooth GLM return models
-#'
+#' @rdname table.ComparitiveReturn.GLM
 #' @export

 table.ComparitiveReturn.GLM <- function (R, n = 3, digits = 4)

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.UnsmoothReturn.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.UnsmoothReturn.R	2013-08-20 11:31:43 UTC (rev 2833)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.UnsmoothReturn.R	2013-08-20 11:46:18 UTC (rev 2834)
@@ -1,6 +1,6 @@
-#' Compenent Decomposition of Table of Unsmooth Returns
+#' @title Component Decomposition of Table of Unsmooth Returns
 #'
-#' Creates a table of estimates of moving averages for comparison across
+#' @description Creates a table of estimates of moving averages for comparison across
 #' multiple instruments or funds as well as their standard error and
 #' smoothing index
 #'
@@ -10,9 +10,9 @@
 #' @param n number of series lags
 #' @param p confidence level for calculation, default p = 0.95
 #' @param digits number of digits to round results to
-#' @author R
+#' @author Peter Carl, Brian Peterson, Shubhankit Mohan
 #' @keywords ts smooth return models
-#'
+#' @rdname table.UnsmoothReturn
 #' @export

 table.UnsmoothReturn <- function (R, n = 3, p= 0.95, digits = 4)

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R	2013-08-20 11:31:43 UTC (rev 2833)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R	2013-08-20 11:46:18 UTC (rev 2834)
@@ -1,4 +1,6 @@
-#' To simulate net asset value (NAV) series where skewness and kurtosis are zero,
+#' @title Generalised Lambda Distribution Simulated Drawdown
+#'
+#' @description To simulate net asset value (NAV) series where skewness and kurtosis are zero,
 #' we draw sample returns from a lognormal return distribution. To capture skewness
 #' and kurtosis, we sample returns from a generalised lambda distribution. The values of
 #' skewness and excess kurtosis used were roughly consistent with the range of values we
@@ -10,11 +12,14 @@
 #'
 #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of
 #' asset returns
-
-#' @author R
-#' @keywords Expected Drawdown Using Brownian Motion Assumptions
-#'
-#' @export
+#' @references Burghardt, G. and Liu, L., \emph{It's the Autocorrelation, Stupid}, Newedge
+#' working paper, November 2012. \cr
+#' \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf}
+#' @author Peter Carl, Brian Peterson, Shubhankit Mohan
+#' @keywords Simulated Drawdown Using Brownian Motion Assumptions
+#' @rdname table.normDD
+#' @export

 table.NormDD <- function (R,digits =4)
 {# @author

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd	2013-08-20 11:31:43 UTC (rev 2833)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd	2013-08-20 11:46:18 UTC (rev 2834)
@@ -20,7 +20,9 @@
 \item{\dots}{any other passthru parameters}
 }
 \description{
-  Autocorrleation adjusted Standard Deviation
+  Incorporates a lagged autocorrelation component into the
+  time-scale-adjusted standard deviation translation
 }
 \author{
  Peter Carl, Brian Peterson, Shubhankit Mohan

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd	2013-08-20 11:46:18 UTC (rev 2834)
@@ -0,0 +1,38 @@
+\name{AcarSim}
+\alias{AcarSim}
+\title{Acar and Shane Maximum Loss}
+\usage{
+  AcarSim(R)
+}
+\arguments{
+  \item{R}{an xts, vector, matrix, data frame, timeSeries
+  or zoo object of asset returns}
+}
+\description{
+  To get some insight on the relationship between maximum
+  drawdown per unit of volatility and mean return divided
+  by volatility, we ran Monte-Carlo simulations. We
+  simulated cash flows over a period of 36 monthly returns
+  and measured maximum drawdown for levels of annualised
+  return divided by volatility varying from minus two to
+  two in steps of 0.1. The process was repeated six
+  thousand times.
+}
+\examples{
+library(PerformanceAnalytics)
+AcarSim(edhec)
+}
+\author{
+  Peter Carl, Brian Peterson, Shubhankit Mohan
+}
+\references{
+  Maximum Loss and Maximum Drawdown in Financial
+  Markets, \emph{International Conference Sponsored by BNP
+  and Imperial College on: Forecasting Financial Markets,
+  London, United Kingdom, May 1997}
+}
+\keyword{Drawdown}
+\keyword{Loss}
+\keyword{Maximum}
+\keyword{Simulated}
+

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CDD.Opt.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CDD.Opt.Rd	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CDD.Opt.Rd	2013-08-20 11:46:18 UTC (rev 2834)
@@ -0,0 +1,49 @@
+\name{CDD.Opt}
+\alias{CDD.Opt}
+\title{Chekhlov Conditional Drawdown at Risk Optimization}
+\usage{
+  CDD.Opt(rmat, alpha = 0.05, rmin = 0, wmin = 0, wmax = 1,
+    weight.sum = 1)
+}
+\arguments{
+  \item{rmat}{matrix of scenario returns, one row per
+  period and one column per asset}
+
+  \item{alpha}{confidence (tolerance) level for the
+  drawdown measure}
+
+  \item{rmin}{minimum acceptable mean portfolio return}
+
+  \item{wmin}{lower bound on the asset weights}
+
+  \item{wmax}{upper bound on the asset weights}
+
+  \item{weight.sum}{target sum of the weights (budget
+  constraint)}
+}
+\description{
+  A new one-parameter family of risk measures called
+  Conditional Drawdown (CDD) has been proposed. These
+  measures of risk are functionals of the portfolio
+  drawdown (underwater) curve considered in active
+  portfolio management. For some value of the tolerance
+  parameter, in the case of a single sample path, the
+  drawdown functional is defined as the mean of the worst
+  (1 - \eqn{\alpha})\% drawdowns.
+}
+\details{
+  This section formulates a portfolio optimization problem
+  with a drawdown risk measure and suggests efficient
+  optimization techniques for solving it. Optimal asset
+  allocation considers: \enumerate{ \item Generation of
+  sample paths for the assets' rates of return. \item
+  Uncompounded cumulative portfolio rate of return rather
+  than compounded one. }
+}
+\examples{
+library(PerformanceAnalytics)
+data(edhec)
+CDD.Opt(edhec)
+}
+\author{
+  Peter Carl, Brian Peterson, Shubhankit Mohan
+}
+\references{
+  Chekhlov, A., Uryasev, S. and Zabarankin, M., Drawdown
+  Measure in Portfolio Optimization, \emph{International
+  Journal of Theoretical and Applied Finance}, Vol. 8,
+  No. 1 (2005), 13-58.
+}
+\keyword{Conditional}
+\keyword{Drawdown}
+\keyword{models}
+

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd	2013-08-20 11:46:18 UTC (rev 2834)
@@ -0,0 +1,52 @@
+\name{CalmarRatio.Norm}
+\alias{CalmarRatio.Norm}
+\title{Normalized Calmar reward/risk ratio}
+\usage{
+  CalmarRatio.Norm(R, tau = 1, scale = NA)
+}
+\arguments{
+  \item{R}{an xts, vector, matrix, data frame, timeSeries
+  or zoo object of asset returns}
+
+  \item{tau}{time scaling parameter, in years (default 1)}
+
+  \item{scale}{number of periods in a year (daily scale =
+  252, monthly scale = 12, quarterly scale = 4)}
+
+  \item{excess}{for Sterling Ratio, excess amount to add to
+  the max drawdown, traditionally and default .1 (10\%)}
+}
+\description{
+  Normalized Calmar and Sterling Ratios are yet another
+  method of creating a risk-adjusted measure for ranking
+  investments similar to the Sharpe Ratio.
+}
+\details{
+  Both the Normalized Calmar and the Sterling ratio are the
+  ratio of annualized return over the absolute value of the
+  maximum drawdown of an investment. The Sterling ratio
+  adds an excess risk measure to the maximum drawdown,
+  traditionally and defaulting to 10\%.
+
+  It is also traditional to use a three year return series
+  for these calculations, although the functions included
+  here make no effort to determine the length of your
+  series. If you want to use a subset of your series,
+  you'll need to truncate or subset the input data to the
+  desired length.
+}
+\examples{
+data(managers)
+ CalmarRatio.Norm(managers[,1,drop=FALSE])
+ CalmarRatio.Norm(managers[,1:6])
+}
+\author{
+  Brian G. Peterson, Peter Carl, Shubhankit Mohan
+}
+\references{
+  Bacon, Carl. Magdon-Ismail, M. and Amir Atiya,
+  \emph{Maximum drawdown}. Risk Magazine, 01 Oct 2004.
+} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{ts} + Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Normalized.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Normalized.Rd (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Normalized.Rd 2013-08-20 11:46:18 UTC (rev 2834) @@ -0,0 +1,7 @@ +\name{SterlingRatio.Normalized} +\alias{SterlingRatio.Normalized} +\usage{ + SterlingRatio.Normalized(R, tau = 1, scale = NA, + excess = 0.1) +} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd 2013-08-20 11:31:43 UTC (rev 2833) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd 2013-08-20 11:46:18 UTC (rev 2834) @@ -1,13 +1,21 @@ -\name{CDrawdown} +\name{CDDOpt} +\alias{CDDOpt} \alias{CDrawdown} \title{Chekhlov Conditional Drawdown at Risk} \usage{ + CDDOpt(rmat, alpha = 0.05, rmin = 0, wmin = 0, wmax = 1, [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2834 From noreply at r-forge.r-project.org Tue Aug 20 13:56:03 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 20 Aug 2013 13:56:03 +0200 (CEST) Subject: [Returnanalytics-commits] r2835 - in pkg/Meucci: . R man Message-ID: <20130820115603.AE1AF185A0D@r-forge.r-project.org> Author: xavierv Date: 2013-08-20 13:56:03 +0200 (Tue, 20 Aug 2013) New Revision: 2835 Added: pkg/Meucci/R/CovertCompoundedReturns2Price.R pkg/Meucci/man/ConvertCompoundedReturns2Price.Rd Modified: pkg/Meucci/DESCRIPTION pkg/Meucci/NAMESPACE Log: - added ConvertCompoundedReturns2Price function Modified: pkg/Meucci/DESCRIPTION =================================================================== --- pkg/Meucci/DESCRIPTION 2013-08-20 11:46:18 UTC (rev 2834) +++ pkg/Meucci/DESCRIPTION 2013-08-20 11:56:03 UTC (rev 2835) @@ -87,5 +87,6 @@ 'FitMultivariateGarch.R' 'MvnRnd.R' 'MleRecursionForStudentT.R' + 'CovertCompoundedReturns2Price.R' ' FitOrnsteinUhlenbeck.R' Modified: pkg/Meucci/NAMESPACE =================================================================== --- pkg/Meucci/NAMESPACE 2013-08-20 11:46:18 UTC (rev 2834) +++ pkg/Meucci/NAMESPACE 2013-08-20 11:56:03 UTC (rev 2835) @@ -7,6 +7,7 @@ export(ComputeMVE) export(CondProbViews) export(ConvertChangeInYield2Price) +export(ConvertCompoundedReturns2Price) export(Cumul2Raw) export(DetectOutliersViaMVE) export(EntropyProg) Added: pkg/Meucci/R/CovertCompoundedReturns2Price.R =================================================================== --- pkg/Meucci/R/CovertCompoundedReturns2Price.R (rev 0) +++ pkg/Meucci/R/CovertCompoundedReturns2Price.R 2013-08-20 11:56:03 UTC (rev 2835) @@ -0,0 +1,29 @@ +#' Convert compounded returns to prices for equity-like securities, as described in +#' A. Meucci "Risk and Asset Allocation", Springer, 2005 +#' +#' @param Exp_Comp_Rets : [vector] (N x 1) expected values of compounded returns +#' @param Cov_Comp_Rets : [matrix] (N x N) covariance matrix of compounded returns +#' @param Starting_Prices : [vector] (N x 1) +#' +#' @return Exp_Prices : [vector] (N x 1) expected values of prices +#' @return Cov_Prices : [matrix] (N x N) covariance matrix of prices +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See (6.77)-(6.79) in "Risk and Asset Allocation"-Springer (2005), by A. 
Meucci +#' See Meucci's script for "ConvertCompoundedReturns2Price.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} +#' @export + + +ConvertCompoundedReturns2Price = function(Exp_Comp_Rets, Cov_Comp_Rets, Starting_Prices) +{ + Mu = log(Starting_Prices) + Exp_Comp_Rets; + Sigma = Cov_Comp_Rets; + + Exp_Prices = exp( Mu + 0.5 * diag( Sigma ) ); + Cov_Prices = exp( Mu + 0.5 * diag( Sigma ) ) %*% t( exp( Mu + 0.5 * diag(Sigma) )) * ( exp( Sigma ) - 1 ); + + return( list( Exp_Prices = Exp_Prices, Cov_Prices = Cov_Prices ) ); +} \ No newline at end of file Added: pkg/Meucci/man/ConvertCompoundedReturns2Price.Rd =================================================================== --- pkg/Meucci/man/ConvertCompoundedReturns2Price.Rd (rev 0) +++ pkg/Meucci/man/ConvertCompoundedReturns2Price.Rd 2013-08-20 11:56:03 UTC (rev 2835) @@ -0,0 +1,37 @@ +\name{ConvertCompoundedReturns2Price} +\alias{ConvertCompoundedReturns2Price} +\title{Convert compounded returns to prices for equity-like securities, as described in +A. Meucci "Risk and Asset Allocation", Springer, 2005} +\usage{ + ConvertCompoundedReturns2Price(Exp_Comp_Rets, + Cov_Comp_Rets, Starting_Prices) +} +\arguments{ + \item{Exp_Comp_Rets}{: [vector] (N x 1) expected values + of compounded returns} + + \item{Cov_Comp_Rets}{: [matrix] (N x N) covariance matrix + of compounded returns} + + \item{Starting_Prices}{: [vector] (N x 1)} +} +\value{ + Exp_Prices : [vector] (N x 1) expected values of prices + + Cov_Prices : [matrix] (N x N) covariance matrix of prices +} +\description{ + Convert compounded returns to prices for equity-like + securities, as described in A. Meucci "Risk and Asset + Allocation", Springer, 2005 +} +\author{ + Xavier Valls \email{flamejat at gmail.com} +} +\references{ + \url{http://symmys.com/node/170} See (6.77)-(6.79) in + "Risk and Asset Allocation"-Springer (2005), by A. Meucci + See Meucci's script for + "ConvertCompoundedReturns2Price.m" +} + From noreply at r-forge.r-project.org Tue Aug 20 17:36:34 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 20 Aug 2013 17:36:34 +0200 (CEST) Subject: [Returnanalytics-commits] r2836 - in pkg/PortfolioAnalytics: . 
R man Message-ID: <20130820153634.297EE185C05@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-20 17:36:33 +0200 (Tue, 20 Aug 2013) New Revision: 2836 Added: pkg/PortfolioAnalytics/man/chart.Weights.Rd Modified: pkg/PortfolioAnalytics/DESCRIPTION pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/charts.DE.R pkg/PortfolioAnalytics/R/charts.GenSA.R pkg/PortfolioAnalytics/R/charts.PSO.R pkg/PortfolioAnalytics/R/charts.ROI.R pkg/PortfolioAnalytics/R/charts.RP.R Log: adding generic method for chart.Weights Modified: pkg/PortfolioAnalytics/DESCRIPTION =================================================================== --- pkg/PortfolioAnalytics/DESCRIPTION 2013-08-20 11:56:03 UTC (rev 2835) +++ pkg/PortfolioAnalytics/DESCRIPTION 2013-08-20 15:36:33 UTC (rev 2836) @@ -50,3 +50,4 @@ 'applyFUN.R' 'charts.PSO.R' 'charts.GenSA.R' + 'chart.Weights.R' Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-20 11:56:03 UTC (rev 2835) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-20 15:36:33 UTC (rev 2836) @@ -10,9 +10,15 @@ export(chart.Scatter.RP) export(chart.Weights.DE) export(chart.Weights.GenSA) +export(chart.Weights.optimize.portfolio.DEoptim) +export(chart.Weights.optimize.portfolio.GenSA) +export(chart.Weights.optimize.portfolio.pso) +export(chart.Weights.optimize.portfolio.random) +export(chart.Weights.optimize.portfolio.ROI) export(chart.Weights.pso) export(chart.Weights.ROI) export(chart.Weights.RP) +export(chart.Weights) export(charts.DE) export(charts.GenSA) export(charts.pso) Modified: pkg/PortfolioAnalytics/R/charts.DE.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-20 11:56:03 UTC (rev 2835) +++ pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-20 15:36:33 UTC (rev 2836) @@ -10,35 +10,16 @@ # ############################################################################### -#' boxplot of the weights of the optimal portfolios -#' -#' Chart the optimal weights and upper and lower bounds on weights of a portfolio run via \code{\link{optimize.portfolio}} -#' -#' @param DE optimal portfolio object created by \code{\link{optimize.portfolio}} -#' @param neighbors set of 'neighbor' portfolios to overplot -#' @param las numeric in \{0,1,2,3\}; the style of axis labels -#' \describe{ -#' \item{0:}{always parallel to the axis [\emph{default}],} -#' \item{1:}{always horizontal,} -#' \item{2:}{always perpendicular to the axis,} -#' \item{3:}{always vertical.} -#' } -#' @param xlab a title for the x axis: see \code{\link{title}} -#' @param cex.lab The magnification to be used for x and y labels relative to the current setting of \code{cex} -#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} -#' @param element.color color for the default plot lines -#' @param ... 
any other passthru parameters -#' @param main an overall title for the plot: see \code{\link{title}} -#' @seealso \code{\link{optimize.portfolio}} +#' @rdname chart.Weights #' @export -chart.Weights.DE <- function(DE, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ +chart.Weights.DE <- function(object, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ # Specific to the output of optimize.portfolio with optimize_method="DEoptim" - if(!inherits(DE, "optimize.portfolio.DEoptim")) stop("DE must be of class 'optimize.portfolio.DEoptim'") + if(!inherits(object, "optimize.portfolio.DEoptim")) stop("object must be of class 'optimize.portfolio.DEoptim'") - columnnames = names(DE$weights) + columnnames = names(object$weights) numassets = length(columnnames) - constraints <- get_constraints(DE$portfolio) + constraints <- get_constraints(object$portfolio) if(is.null(xlab)) minmargin = 3 @@ -57,12 +38,12 @@ bottommargin = minmargin } par(mar = c(bottommargin, 4, topmargin, 2) +.1) - plot(DE$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) + plot(object$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) # if(!is.null(neighbors)){ # if(is.vector(neighbors)){ - # xtract=extractStats(DE) + # xtract=extractStats(object) # weightcols<-grep('w\\.',colnames(xtract)) #need \\. to get the dot # if(length(neighbors)==1){ # # overplot nearby portfolios defined by 'out' @@ -84,12 +65,16 @@ # } # } - # points(DE$weights, type="b", col="blue", pch=16) + # points(object$weights, type="b", col="blue", pch=16) axis(2, cex.axis = cex.axis, col = element.color) axis(1, labels=columnnames, at=1:numassets, las=las, cex.axis = cex.axis, col = element.color) box(col = element.color) } +#' @rdname chart.Weights +#' @export +chart.Weights.optimize.portfolio.DEoptim <- chart.Weights.DE + #' classic risk return scatter of DEoptim results #' #' @param DE set of portfolios created by \code{\link{optimize.portfolio}} Modified: pkg/PortfolioAnalytics/R/charts.GenSA.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-20 11:56:03 UTC (rev 2835) +++ pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-20 15:36:33 UTC (rev 2836) @@ -1,34 +1,14 @@ -#' boxplot of the weights of the optimal portfolios -#' -#' Chart the optimal weights and upper and lower bounds on weights of a portfolio run via \code{\link{optimize.portfolio}} -#' -#' @param GenSA optimal portfolio object created by \code{\link{optimize.portfolio}} -#' @param neighbors set of 'neighbor' portfolios to overplot -#' @param las numeric in \{0,1,2,3\}; the style of axis labels -#' \describe{ -#' \item{0:}{always parallel to the axis [\emph{default}],} -#' \item{1:}{always horizontal,} -#' \item{2:}{always perpendicular to the axis,} -#' \item{3:}{always vertical.} -#' } -#' @param xlab a title for the x axis: see \code{\link{title}} -#' @param cex.lab The magnification to be used for x and y labels relative to the current setting of \code{cex} -#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} -#' @param element.color color 
for the default plot lines -#' @param ... any other passthru parameters -#' @param main an overall title for the plot: see \code{\link{title}} -#' @seealso \code{\link{optimize.portfolio}} -#' @author Ross Bennett +#' @rdname chart.Weights #' @export -chart.Weights.GenSA <- function(GenSA, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ +chart.Weights.GenSA <- function(object, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ - if(!inherits(GenSA, "optimize.portfolio.GenSA")) stop("GenSA must be of class 'optimize.portfolio.GenSA'") + if(!inherits(object, "optimize.portfolio.GenSA")) stop("object must be of class 'optimize.portfolio.GenSA'") - columnnames = names(GenSA$weights) + columnnames = names(object$weights) numassets = length(columnnames) - constraints <- get_constraints(GenSA$portfolio) + constraints <- get_constraints(object$portfolio) if(is.null(xlab)) minmargin = 3 @@ -47,7 +27,7 @@ bottommargin = minmargin } par(mar = c(bottommargin, 4, topmargin, 2) +.1) - plot(GenSA$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) + plot(object$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) # if(!is.null(neighbors)){ @@ -79,6 +59,10 @@ box(col = element.color) } +#' @rdname chart.Weights +#' @export +chart.Weights.optimize.portfolio.GenSA <- chart.Weights.GenSA + #' classic risk return scatter of random portfolios #' #' The GenSA optimizer does not store the portfolio weights like DEoptim or random Modified: pkg/PortfolioAnalytics/R/charts.PSO.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-20 11:56:03 UTC (rev 2835) +++ pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-20 15:36:33 UTC (rev 2836) @@ -1,31 +1,14 @@ -#' boxplot of the weights in the portfolio -#' -#' @param pso object created by \code{\link{optimize.portfolio}} -#' @param neighbors set of 'neighbor' portfolios to overplot -#' @param las numeric in \{0,1,2,3\}; the style of axis labels -#' \describe{ -#' \item{0:}{always parallel to the axis [\emph{default}],} -#' \item{1:}{always horizontal,} -#' \item{2:}{always perpendicular to the axis,} -#' \item{3:}{always vertical.} -#' } -#' @param xlab a title for the x axis: see \code{\link{title}} -#' @param cex.lab The magnification to be used for x and y labels relative to the current setting of \code{cex} -#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} -#' @param element.color color for the default plot lines -#' @param ... 
any other passthru parameters -#' @param main an overall title for the plot: see \code{\link{title}} -#' @seealso \code{\link{optimize.portfolio}} -#' @author Ross Bennett + +#' @rdname chart.Weights #' @export -chart.Weights.pso <- function(pso, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ +chart.Weights.pso <- function(object, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ - if(!inherits(pso, "optimize.portfolio.pso")) stop("pso must be of class 'optimize.portfolio.pso'") + if(!inherits(object, "optimize.portfolio.pso")) stop("object must be of class 'optimize.portfolio.pso'") - columnnames = names(pso$weights) + columnnames = names(object$weights) numassets = length(columnnames) - constraints <- get_constraints(pso$portfolio) + constraints <- get_constraints(object$portfolio) if(is.null(xlab)) minmargin = 3 @@ -44,7 +27,7 @@ bottommargin = minmargin } par(mar = c(bottommargin, 4, topmargin, 2) +.1) - plot(pso$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) + plot(object$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) # if(!is.null(neighbors)){ @@ -76,6 +59,9 @@ box(col = element.color) } +#' @rdname chart.Weights +#' @export +chart.Weights.optimize.portfolio.pso <- chart.Weights.pso #' classic risk return scatter of random portfolios #' Modified: pkg/PortfolioAnalytics/R/charts.ROI.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-20 11:56:03 UTC (rev 2835) +++ pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-20 15:36:33 UTC (rev 2836) @@ -1,32 +1,14 @@ -#' boxplot of the weights in the portfolio -#' -#' @param ROI object created by \code{\link{optimize.portfolio}} -#' @param neighbors set of 'neighbor' portfolios to overplot -#' @param las numeric in \{0,1,2,3\}; the style of axis labels -#' \describe{ -#' \item{0:}{always parallel to the axis [\emph{default}],} -#' \item{1:}{always horizontal,} -#' \item{2:}{always perpendicular to the axis,} -#' \item{3:}{always vertical.} -#' } -#' @param xlab a title for the x axis: see \code{\link{title}} -#' @param cex.lab The magnification to be used for x and y labels relative to the current setting of \code{cex} -#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} -#' @param element.color color for the default plot lines -#' @param ... 
any other passthru parameters -#' @param main an overall title for the plot: see \code{\link{title}} -#' @seealso \code{\link{optimize.portfolio}} -#' @author Ross Bennett +#' @rdname chart.Weights #' @export -chart.Weights.ROI <- function(ROI, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ +chart.Weights.ROI <- function(object, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ - if(!inherits(ROI, "optimize.portfolio.ROI")) stop("ROI must be of class 'optimize.portfolio.ROI'") + if(!inherits(object, "optimize.portfolio.ROI")) stop("object must be of class 'optimize.portfolio.ROI'") - columnnames = names(ROI$weights) + columnnames = names(object$weights) numassets = length(columnnames) - constraints <- get_constraints(ROI$portfolio) + constraints <- get_constraints(object$portfolio) if(is.null(xlab)) minmargin = 3 @@ -45,12 +27,12 @@ bottommargin = minmargin } par(mar = c(bottommargin, 4, topmargin, 2) +.1) - plot(ROI$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) + plot(object$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) # if(!is.null(neighbors)){ # if(is.vector(neighbors)){ - # xtract=extractStats(ROI) + # xtract=extractStats(object) # weightcols<-grep('w\\.',colnames(xtract)) #need \\. to get the dot # if(length(neighbors)==1){ # # overplot nearby portfolios defined by 'out' @@ -71,12 +53,16 @@ # # also note the need for as.numeric. 
points() doesn't like matrix inputs # } # } - # points(ROI$weights, type="b", col="blue", pch=16) + # points(object$weights, type="b", col="blue", pch=16) axis(2, cex.axis = cex.axis, col = element.color) axis(1, labels=columnnames, at=1:numassets, las=las, cex.axis = cex.axis, col = element.color) box(col = element.color) } +#' @rdname chart.Weights +#' @export +chart.Weights.optimize.portfolio.ROI <- chart.Weights.ROI + #' classic risk return scatter of random portfolios #' #' The ROI optimizers do not store the portfolio weights like DEoptim or random Modified: pkg/PortfolioAnalytics/R/charts.RP.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-20 11:56:03 UTC (rev 2835) +++ pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-20 15:36:33 UTC (rev 2836) @@ -10,35 +10,19 @@ # ############################################################################### -#' boxplot of the weight distributions in the random portfolios -#' @param RP set of random portfolios created by \code{\link{optimize.portfolio}} -#' @param neighbors set of 'neighbor' portfolios to overplot -#' @param las numeric in \{0,1,2,3\}; the style of axis labels -#' \describe{ -#' \item{0:}{always parallel to the axis [\emph{default}],} -#' \item{1:}{always horizontal,} -#' \item{2:}{always perpendicular to the axis,} -#' \item{3:}{always vertical.} -#' } -#' @param xlab a title for the x axis: see \code{\link{title}} -#' @param cex.lab The magnification to be used for x and y labels relative to the current setting of \code{cex} -#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} -#' @param element.color color for the default plot lines -#' @param ... any other passthru parameters -#' @param main an overall title for the plot: see \code{\link{title}} -#' @seealso \code{\link{optimize.portfolio}} +#' @rdname chart.Weights #' @export -chart.Weights.RP <- function(RP, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ +chart.Weights.RP <- function(object, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ # Specific to the output of the random portfolio code with constraints - # @TODO: check that RP is of the correct class + # @TODO: check that object is of the correct class # FIXED - if(!inherits(RP, "optimize.portfolio.random")){ - stop("RP must be of class 'optimize.portfolio.random'") + if(!inherits(object, "optimize.portfolio.random")){ + stop("object must be of class 'optimize.portfolio.random'") } - columnnames = names(RP$weights) + columnnames = names(object$weights) numassets = length(columnnames) - constraints <- get_constraints(RP$portfolio) + constraints <- get_constraints(object$portfolio) if(is.null(xlab)) minmargin = 3 @@ -57,12 +41,12 @@ bottommargin = minmargin } par(mar = c(bottommargin, 4, topmargin, 2) +.1) - plot(RP$random_portfolios[1,], type="b", col="orange", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, ...) + plot(object$random_portfolios[1,], type="b", col="orange", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, ...) 
points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) if(!is.null(neighbors)){ if(is.vector(neighbors)){ - xtract=extractStats(RP) + xtract=extractStats(object) weightcols<-grep('w\\.',colnames(xtract)) #need \\. to get the dot if(length(neighbors)==1){ # overplot nearby portfolios defined by 'out' @@ -84,13 +68,17 @@ } } - points(RP$random_portfolios[1,], type="b", col="orange", pch=16) # to overprint neighbors - points(RP$weights, type="b", col="blue", pch=16) + points(object$random_portfolios[1,], type="b", col="orange", pch=16) # to overprint neighbors + points(object$weights, type="b", col="blue", pch=16) axis(2, cex.axis = cex.axis, col = element.color) axis(1, labels=columnnames, at=1:numassets, las=las, cex.axis = cex.axis, col = element.color) box(col = element.color) } +#' @rdname chart.Weights +#' @export +chart.Weights.optimize.portfolio.random <- chart.Weights.RP + #' classic risk return scatter of random portfolios #' #' @param RP set of portfolios created by \code{\link{optimize.portfolio}} Added: pkg/PortfolioAnalytics/man/chart.Weights.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Weights.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.Weights.Rd 2013-08-20 15:36:33 UTC (rev 2836) @@ -0,0 +1,101 @@ +\name{chart.Weights.DE} +\alias{chart.Weights} +\alias{chart.Weights.DE} +\alias{chart.Weights.GenSA} +\alias{chart.Weights.optimize.portfolio.DEoptim} +\alias{chart.Weights.optimize.portfolio.GenSA} +\alias{chart.Weights.optimize.portfolio.pso} +\alias{chart.Weights.optimize.portfolio.random} +\alias{chart.Weights.optimize.portfolio.ROI} +\alias{chart.Weights.pso} +\alias{chart.Weights.ROI} +\alias{chart.Weights.RP} +\title{boxplot of the weights of the optimal portfolios} +\usage{ + chart.Weights.DE(object, neighbors = NULL, ..., + main = "Weights", las = 3, xlab = NULL, cex.lab = 1, + element.color = "darkgray", cex.axis = 0.8) + + chart.Weights.optimize.portfolio.DEoptim(object, + neighbors = NULL, ..., main = "Weights", las = 3, + xlab = NULL, cex.lab = 1, element.color = "darkgray", + cex.axis = 0.8) + + chart.Weights.RP(object, neighbors = NULL, ..., + main = "Weights", las = 3, xlab = NULL, cex.lab = 1, + element.color = "darkgray", cex.axis = 0.8) + + chart.Weights.optimize.portfolio.random(object, + neighbors = NULL, ..., main = "Weights", las = 3, + xlab = NULL, cex.lab = 1, element.color = "darkgray", + cex.axis = 0.8) + + chart.Weights.ROI(object, neighbors = NULL, ..., + main = "Weights", las = 3, xlab = NULL, cex.lab = 1, + element.color = "darkgray", cex.axis = 0.8) + + chart.Weights.optimize.portfolio.ROI(object, + neighbors = NULL, ..., main = "Weights", las = 3, + xlab = NULL, cex.lab = 1, element.color = "darkgray", + cex.axis = 0.8) + + chart.Weights.pso(object, neighbors = NULL, ..., + main = "Weights", las = 3, xlab = NULL, cex.lab = 1, + element.color = "darkgray", cex.axis = 0.8) + + chart.Weights.optimize.portfolio.pso(object, + neighbors = NULL, ..., main = "Weights", las = 3, + xlab = NULL, cex.lab = 1, element.color = "darkgray", + cex.axis = 0.8) + + chart.Weights.GenSA(object, neighbors = NULL, ..., + main = "Weights", las = 3, xlab = NULL, cex.lab = 1, + element.color = "darkgray", cex.axis = 0.8) + + chart.Weights.optimize.portfolio.GenSA(object, + neighbors = NULL, ..., main = "Weights", las = 3, + xlab = NULL, cex.lab = 1, element.color = "darkgray", + cex.axis = 0.8) + + 
chart.Weights(object, neighbors = NULL, ..., + main = "Weights", las = 3, xlab = NULL, cex.lab = 1, + element.color = "darkgray", cex.axis = 0.8) +} +\arguments{ + \item{object}{optimal portfolio object created by + \code{\link{optimize.portfolio}}} + + \item{neighbors}{set of 'neighbor' portfolios to + overplot} + + \item{las}{numeric in \{0,1,2,3\}; the style of axis + labels \describe{ \item{0:}{always parallel to the axis + [\emph{default}],} \item{1:}{always horizontal,} + \item{2:}{always perpendicular to the axis,} + \item{3:}{always vertical.} }} + + \item{xlab}{a title for the x axis: see + \code{\link{title}}} + + \item{cex.lab}{The magnification to be used for x and y + labels relative to the current setting of \code{cex}} + + \item{cex.axis}{The magnification to be used for axis + annotation relative to the current setting of \code{cex}} + + \item{element.color}{color for the default plot lines} + + \item{...}{any other passthru parameters} + + \item{main}{an overall title for the plot: see + \code{\link{title}}} +} +\description{ + Chart the optimal weights and upper and lower bounds on + weights of a portfolio run via + \code{\link{optimize.portfolio}} +} +\seealso{ + \code{\link{optimize.portfolio}} +} + From noreply at r-forge.r-project.org Tue Aug 20 19:05:15 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 20 Aug 2013 19:05:15 +0200 (CEST) Subject: [Returnanalytics-commits] r2837 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130820170516.04BE118589A@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-20 19:05:15 +0200 (Tue, 20 Aug 2013) New Revision: 2837 Added: pkg/PortfolioAnalytics/R/chart.RiskReward.R pkg/PortfolioAnalytics/R/chart.Weights.R pkg/PortfolioAnalytics/man/chart.RiskReward.Rd Modified: pkg/PortfolioAnalytics/DESCRIPTION pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/charts.DE.R pkg/PortfolioAnalytics/R/charts.GenSA.R pkg/PortfolioAnalytics/R/charts.PSO.R pkg/PortfolioAnalytics/R/charts.ROI.R pkg/PortfolioAnalytics/R/charts.RP.R pkg/PortfolioAnalytics/man/chart.Scatter.DE.Rd pkg/PortfolioAnalytics/man/chart.Scatter.ROI.Rd pkg/PortfolioAnalytics/man/chart.Scatter.RP.Rd pkg/PortfolioAnalytics/man/chart.Scatter.pso.Rd Log: adding chart.RiskReward as a generic function Modified: pkg/PortfolioAnalytics/DESCRIPTION =================================================================== --- pkg/PortfolioAnalytics/DESCRIPTION 2013-08-20 15:36:33 UTC (rev 2836) +++ pkg/PortfolioAnalytics/DESCRIPTION 2013-08-20 17:05:15 UTC (rev 2837) @@ -51,3 +51,4 @@ 'charts.PSO.R' 'charts.GenSA.R' 'chart.Weights.R' + 'chart.RiskReward.R' Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-20 15:36:33 UTC (rev 2836) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-20 17:05:15 UTC (rev 2837) @@ -3,6 +3,12 @@ export(applyFUN) export(box_constraint) export(CCCgarch.MM) +export(chart.RiskReward.optimize.portfolio.DEoptim) +export(chart.RiskReward.optimize.portfolio.GenSA) +export(chart.RiskReward.optimize.portfolio.pso) +export(chart.RiskReward.optimize.portfolio.random) +export(chart.RiskReward.optimize.portfolio.ROI) +export(chart.RiskReward) export(chart.Scatter.DE) export(chart.Scatter.GenSA) export(chart.Scatter.pso) Added: pkg/PortfolioAnalytics/R/chart.RiskReward.R =================================================================== --- pkg/PortfolioAnalytics/R/chart.RiskReward.R (rev 0) +++ 
pkg/PortfolioAnalytics/R/chart.RiskReward.R 2013-08-20 17:05:15 UTC (rev 2837) @@ -0,0 +1,19 @@ + + +#' classic risk reward scatter +#' +#' @param object optimal portfolio created by \code{\link{optimize.portfolio}} +#' @param neighbors set of 'neighbor' portfolios to overplot, see Details in \code{\link{charts.DE}} +#' @param ... any other passthru parameters +#' @param rp TRUE/FALSE. If TRUE, random portfolios are generated by \code{\link{random_portfolios}} to view the feasible space +#' @param return.col string matching the objective of a 'return' objective, on vertical axis +#' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis +#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} +#' @param element.color color for the default plot scatter points +#' @seealso \code{\link{optimize.portfolio}} +#' @export +chart.RiskReward <- function(object, neighbors, ..., rp=FALSE, return.col="mean", risk.col="ES", element.color = "darkgray", cex.axis=0.8){ + UseMethod("chart.RiskReward") +} + + Added: pkg/PortfolioAnalytics/R/chart.Weights.R =================================================================== --- pkg/PortfolioAnalytics/R/chart.Weights.R (rev 0) +++ pkg/PortfolioAnalytics/R/chart.Weights.R 2013-08-20 17:05:15 UTC (rev 2837) @@ -0,0 +1,26 @@ + +#' boxplot of the weights of the optimal portfolios +#' +#' Chart the optimal weights and upper and lower bounds on weights of a portfolio run via \code{\link{optimize.portfolio}} +#' +#' @param object optimal portfolio object created by \code{\link{optimize.portfolio}} +#' @param neighbors set of 'neighbor' portfolios to overplot +#' @param las numeric in \{0,1,2,3\}; the style of axis labels +#' \describe{ +#' \item{0:}{always parallel to the axis [\emph{default}],} +#' \item{1:}{always horizontal,} +#' \item{2:}{always perpendicular to the axis,} +#' \item{3:}{always vertical.} +#' } +#' @param xlab a title for the x axis: see \code{\link{title}} +#' @param cex.lab The magnification to be used for x and y labels relative to the current setting of \code{cex} +#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} +#' @param element.color color for the default plot lines +#' @param ... any other passthru parameters +#' @param main an overall title for the plot: see \code{\link{title}} +#' @seealso \code{\link{optimize.portfolio}} +#' @export +chart.Weights <- function(object, neighbors = NULL, ..., main="Weights", las = 3, xlab=NULL, cex.lab = 1, element.color = "darkgray", cex.axis=0.8){ + UseMethod("chart.Weights") +} + Modified: pkg/PortfolioAnalytics/R/charts.DE.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-20 15:36:33 UTC (rev 2836) +++ pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-20 17:05:15 UTC (rev 2837) @@ -75,27 +75,18 @@ #' @export chart.Weights.optimize.portfolio.DEoptim <- chart.Weights.DE -#' classic risk return scatter of DEoptim results -#' -#' @param DE set of portfolios created by \code{\link{optimize.portfolio}} -#' @param neighbors set of 'neighbor' portfolios to overplot, see Details in \code{\link{charts.DE}} -#' @param return.col string matching the objective of a 'return' objective, on vertical axis -#' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis -#' @param ... 
any other passthru parameters -#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} -#' @param element.color color for the default plot scatter points -#' @seealso \code{\link{optimize.portfolio}} +#' @rdname chart.RiskReward #' @export -chart.Scatter.DE <- function(DE, neighbors = NULL, return.col='mean', risk.col='ES', ..., element.color = "darkgray", cex.axis=0.8){ +chart.Scatter.DE <- function(object, neighbors = NULL, ..., return.col='mean', risk.col='ES', element.color = "darkgray", cex.axis=0.8){ # more or less specific to the output of the DEoptim portfolio code with constraints # will work to a point with other functions, such as optimize.porfolio.parallel # there's still a lot to do to improve this. - if(!inherits(DE, "optimize.portfolio.DEoptim")) stop("DE must be of class 'optimize.portfolio.DEoptim'") + if(!inherits(object, "optimize.portfolio.DEoptim")) stop("object must be of class 'optimize.portfolio.DEoptim'") - R <- DE$R - portfolio <- DE$portfolio - xtract = extractStats(DE) + R <- object$R + portfolio <- object$portfolio + xtract = extractStats(object) columnnames = colnames(xtract) #return.column = grep(paste("objective_measures",return.col,sep='.'),columnnames) return.column = pmatch(return.col,columnnames) @@ -176,8 +167,8 @@ } # points(xtract[1,risk.column],xtract[1,return.column], col="orange", pch=16) # overplot the equal weighted (or seed) - #check to see if portfolio 1 is EW DE$random_portoflios[1,] all weights should be the same - # if(!isTRUE(all.equal(DE$random_portfolios[1,][1],1/length(DE$random_portfolios[1,]),check.attributes=FALSE))){ + #check to see if portfolio 1 is EW object$random_portoflios[1,] all weights should be the same + # if(!isTRUE(all.equal(object$random_portfolios[1,][1],1/length(object$random_portfolios[1,]),check.attributes=FALSE))){ #show both the seed and EW if they are different #NOTE the all.equal comparison could fail above if the first element of the first portfolio is the same as the EW weight, #but the rest is not, shouldn't happen often with real portfolios, only toy examples @@ -186,7 +177,7 @@ ## Draw solution trajectory if(!is.null(R) & !is.null(portfolio)){ - w.traj = unique(DE$DEoutput$member$bestmemit) + w.traj = unique(object$DEoutput$member$bestmemit) rows = nrow(w.traj) rr = matrix(nrow=rows, ncol=2) ## maybe rewrite as an apply statement by row on w.traj @@ -227,12 +218,12 @@ ## @TODO: Generalize this to find column containing the "risk" metric - if(length(names(DE)[which(names(DE)=='constrained_objective')])) { + if(length(names(object)[which(names(object)=='constrained_objective')])) { result.slot<-'constrained_objective' } else { result.slot<-'objective_measures' } - objcols<-unlist(DE[[result.slot]]) + objcols<-unlist(object[[result.slot]]) names(objcols)<-name.replace(names(objcols)) return.column = pmatch(return.col,names(objcols)) if(is.na(return.column)) { @@ -250,7 +241,7 @@ return.col <- gsub("\\..*", "", return.col) risk.col <- gsub("\\..*", "", risk.col) # warning(return.col,' or ', risk.col, ' do not match extractStats output of $objective_measures slot') - opt_weights <- DE$weights + opt_weights <- object$weights ret <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=return.col)) risk <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=risk.col)) points(risk, ret, col="blue", pch=16) #optimal @@ -262,6 +253,9 @@ box(col = element.color) } +#' @rdname chart.RiskReward +#' @export +chart.RiskReward.optimize.portfolio.DEoptim <- chart.Scatter.DE 
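Commits r2836 and r2837 move the chart.Weights.* and chart.RiskReward.* functions behind S3 generics, so callers no longer need to know which optimizer produced a result. A minimal sketch of the dispatch pattern, with a hypothetical method body:

chart.Weights <- function(object, ...) UseMethod("chart.Weights")
chart.Weights.optimize.portfolio.ROI <- function(object, ...) {
  # dispatched when class(object) inherits from 'optimize.portfolio.ROI'
  plot(object$weights, type = "b", ylab = "Weights")
}
# usage: chart.Weights(opt) selects the method matching class(opt);
# chart.RiskReward(opt, return.col = "mean", risk.col = "ES") dispatches the
# same way via the generic added in chart.RiskReward.R (r2837).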
#' scatter and weights chart for random portfolios #' @@ -290,9 +284,9 @@ op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,1.5),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.DE(DE, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) + chart.Scatter.DE(object=DE, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) par(mar=c(2,4,0,2)) - chart.Weights.DE(DE, main="", neighbors=neighbors, ...) + chart.Weights.DE(object=DE, main="", neighbors=neighbors, ...) par(op) } Modified: pkg/PortfolioAnalytics/R/charts.GenSA.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-20 15:36:33 UTC (rev 2836) +++ pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-20 17:05:15 UTC (rev 2837) @@ -63,38 +63,22 @@ #' @export chart.Weights.optimize.portfolio.GenSA <- chart.Weights.GenSA -#' classic risk return scatter of random portfolios -#' -#' The GenSA optimizer does not store the portfolio weights like DEoptim or random -#' portfolios so we will generate random portfolios for the scatter plot. -#' -#' \code{return.col} must be the name of a function used to compute the return metric on the random portfolio weights -#' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights -#' -#' @param GenSA object created by \code{\link{optimize.portfolio}} -#' @param rp set of weights generated by \code{\link{random_portfolio}} -#' @param return.col string matching the objective of a 'return' objective, on vertical axis -#' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis -#' @param ... any other passthru parameters -#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} -#' @param element.color color for the default plot scatter points -#' @seealso \code{\link{optimize.portfolio}} -#' @author Ross Bennett +#' @rdname chart.RiskReward #' @export -chart.Scatter.GenSA <- function(GenSA, rp=NULL, return.col="mean", risk.col="StdDev", ..., element.color = "darkgray", cex.axis=0.8, main=""){ +chart.Scatter.GenSA <- function(object, neighbors=NULL, ..., rp=FALSE, return.col="mean", risk.col="ES", element.color="darkgray", cex.axis=0.8){ - if(!inherits(GenSA, "optimize.portfolio.GenSA")) stop("GenSA must be of class 'optimize.portfolio.GenSA'") + if(!inherits(object, "optimize.portfolio.GenSA")) stop("object must be of class 'optimize.portfolio.GenSA'") - R <- GenSA$R + R <- object$R # If the user does not pass in rp, then we will generate random portfolios - if(is.null(rp)){ + if(rp){ permutations <- match.call(expand.dots=TRUE)$permutations if(is.null(permutations)) permutations <- 2000 - rp <- random_portfolios(portfolio=GenSA$portfolio, permutations=permutations) + rp <- random_portfolios(portfolio=object$portfolio, permutations=permutations) } # Get the optimal weights from the output of optimize.portfolio - wts <- GenSA$weights + wts <- object$weights # cbind the optimal weights and random portfolio weights rp <- rbind(wts, rp) @@ -102,13 +86,17 @@ returnpoints <- applyFUN(R=R, weights=rp, FUN=return.col, ...=...) riskpoints <- applyFUN(R=R, weights=rp, FUN=risk.col, ...=...) - plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, main=main) + plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) 
points(x=riskpoints[1], y=returnpoints[1], col="blue", pch=16) # optimal axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) } +#' @rdname chart.RiskReward +#' @export +chart.RiskReward.optimize.portfolio.GenSA <- chart.Scatter.GenSA + #' scatter and weights chart for portfolios #' #' \code{return.col} must be the name of a function used to compute the return metric on the random portfolio weights @@ -126,15 +114,15 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -charts.GenSA <- function(GenSA, rp=NULL, return.col="mean", risk.col="StdDev", +charts.GenSA <- function(GenSA, rp=FALSE, return.col="mean", risk.col="StdDev", cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ # Specific to the output of the optimize_method=GenSA op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,2),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.GenSA(GenSA=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) + chart.Scatter.GenSA(object=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) par(mar=c(2,4,0,2)) - chart.Weights.GenSA(GenSA=GenSA, neighbors=neighbors, las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis, ...=..., main="") + chart.Weights.GenSA(object=GenSA, neighbors=neighbors, las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis, ...=..., main="") par(op) } @@ -155,6 +143,6 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.GenSA <- function(GenSA, rp=NULL, return.col="mean", risk.col="StdDev", cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ +plot.optimize.portfolio.GenSA <- function(GenSA, rp=FALSE, return.col="mean", risk.col="StdDev", cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ charts.GenSA(GenSA=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) } Modified: pkg/PortfolioAnalytics/R/charts.PSO.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-20 15:36:33 UTC (rev 2836) +++ pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-20 17:05:15 UTC (rev 2837) @@ -63,27 +63,14 @@ #' @export chart.Weights.optimize.portfolio.pso <- chart.Weights.pso -#' classic risk return scatter of random portfolios -#' -#' -#' \code{return.col} must be the name of a function used to compute the return metric on the portfolio weights -#' \code{risk.col} must be the name of a function used to compute the risk metric on the portfolio weights -#' -#' @param pso object created by \code{\link{optimize.portfolio}} -#' @param return.col string matching the objective of a 'return' objective, on vertical axis -#' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis -#' @param ... 
any other passthru parameters -#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} -#' @param element.color color for the default plot scatter points -#' @seealso \code{\link{optimize.portfolio}} -#' @author Ross Bennett +#' @rdname chart.RiskReward #' @export -chart.Scatter.pso <- function(pso, return.col="mean", risk.col="StdDev", ..., element.color = "darkgray", cex.axis=0.8, main=""){ - if(!inherits(pso, "optimize.portfolio.pso")) stop("pso must be of class 'optimize.portfolio.pso'") - R <- pso$R +chart.Scatter.pso <- function(object, neighbors=NULL, ..., return.col="mean", risk.col="ES", element.color = "darkgray", cex.axis=0.8){ + if(!inherits(object, "optimize.portfolio.pso")) stop("object must be of class 'optimize.portfolio.pso'") + R <- object$R # Object with the "out" value in the first column and the normalized weights # The first row is the optimal "out" value and the optimal weights - tmp <- extractStats(pso) + tmp <- extractStats(object) # Get the weights wts <- tmp[,-1] @@ -91,13 +78,17 @@ returnpoints <- applyFUN(R=R, weights=wts, FUN=return.col, ...=...) riskpoints <- applyFUN(R=R, weights=wts, FUN=risk.col, ...=...) - plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, main=main) + plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) points(x=riskpoints[1], y=returnpoints[1], col="blue", pch=16) # optimal axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) } +#' @rdname chart.RiskReward +#' @export +chart.RiskReward.optimize.portfolio.pso <- chart.Scatter.pso + #' scatter and weights chart for portfolios #' #' \code{return.col} must be the name of a function used to compute the return metric on the random portfolio weights @@ -120,9 +111,9 @@ op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,2),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.pso(pso=pso, return.col=return.col, risk.col=risk.col, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) + chart.Scatter.pso(object=pso, return.col=return.col, risk.col=risk.col, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) par(mar=c(2,4,0,2)) - chart.Weights.pso(pso=pso, neighbors=neighbors, las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis, ...=..., main="") + chart.Weights.pso(object=pso, neighbors=neighbors, las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis, ...=..., main="") par(op) } Modified: pkg/PortfolioAnalytics/R/charts.ROI.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-20 15:36:33 UTC (rev 2836) +++ pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-20 17:05:15 UTC (rev 2837) @@ -63,38 +63,22 @@ #' @export chart.Weights.optimize.portfolio.ROI <- chart.Weights.ROI -#' classic risk return scatter of random portfolios -#' -#' The ROI optimizers do not store the portfolio weights like DEoptim or random -#' portfolios so we will generate random portfolios for the scatter plot. 
-#' -#' \code{return.col} must be the name of a function used to compute the return metric on the random portfolio weights -#' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights -#' -#' @param ROI object created by \code{\link{optimize.portfolio}} -#' @param rp matrix of random portfolios generated by \code{\link{random_portfolio}} -#' @param return.col string matching the objective of a 'return' objective, on vertical axis -#' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis -#' @param ... any other passthru parameters -#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} -#' @param element.color color for the default plot scatter points -#' @seealso \code{\link{optimize.portfolio}} -#' @author Ross Bennett +#' @rdname chart.RiskReward #' @export -chart.Scatter.ROI <- function(ROI, rp=NULL, return.col="mean", risk.col="StdDev", ..., element.color = "darkgray", cex.axis=0.8, main=""){ +chart.Scatter.ROI <- function(object, neighbors=NULL, ..., rp=FALSE, return.col="mean", risk.col="ES", element.color = "darkgray", cex.axis=0.8){ - if(!inherits(ROI, "optimize.portfolio.ROI")) stop("ROI must be of class 'optimize.portfolio.ROI'") + if(!inherits(object, "optimize.portfolio.ROI")) stop("object must be of class 'optimize.portfolio.ROI'") - R <- ROI$R + R <- object$R # If the user does not pass in rp, then we will generate random portfolios - if(is.null(rp)){ + if(rp){ permutations <- match.call(expand.dots=TRUE)$permutations if(is.null(permutations)) permutations <- 2000 - rp <- random_portfolios(portfolio=ROI$portfolio, permutations=permutations) + rp <- random_portfolios(portfolio=object$portfolio, permutations=permutations) } # Get the optimal weights from the output of optimize.portfolio - wts <- ROI$weights + wts <- object$weights # cbind the optimal weights and random portfolio weights rp <- rbind(wts, rp) @@ -102,13 +86,17 @@ returnpoints <- applyFUN(R=R, weights=rp, FUN=return.col, ...=...) riskpoints <- applyFUN(R=R, weights=rp, FUN=risk.col, ...=...) - plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, main=main) + plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) 
points(x=riskpoints[1], y=returnpoints[1], col="blue", pch=16) # optimal axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) } +#' @rdname chart.RiskReward +#' @export +chart.RiskReward.optimize.portfolio.ROI <- chart.Scatter.ROI + #' scatter and weights chart for portfolios #' #' The ROI optimizers do not store the portfolio weights like DEoptim or random @@ -129,7 +117,7 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -charts.ROI <- function(ROI, rp=NULL, risk.col="StdDev", return.col="mean", +charts.ROI <- function(ROI, rp=FALSE, risk.col="StdDev", return.col="mean", cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", ...){ # Specific to the output of the optimize_method=ROI R <- ROI$R @@ -162,6 +150,6 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.ROI <- function(ROI, rp=NULL, risk.col="StdDev", return.col="mean", element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", ...){ +plot.optimize.portfolio.ROI <- function(ROI, rp=FALSE, risk.col="StdDev", return.col="mean", element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", ...){ charts.ROI(ROI=ROI, rp=rp, risk.col=risk.col, return.col=return.col, main=main, ...) } Modified: pkg/PortfolioAnalytics/R/charts.RP.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-20 15:36:33 UTC (rev 2836) +++ pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-20 17:05:15 UTC (rev 2837) @@ -79,26 +79,17 @@ #' @export chart.Weights.optimize.portfolio.random <- chart.Weights.RP -#' classic risk return scatter of random portfolios -#' -#' @param RP set of portfolios created by \code{\link{optimize.portfolio}} -#' @param neighbors set of 'neighbor' portfolios to overplot, see Details -#' @param return.col string matching the objective of a 'return' objective, on vertical axis -#' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis -#' @param ... any other passthru parameters -#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} -#' @param element.color color for the default plot scatter points -#' @seealso \code{\link{optimize.portfolio}} +#' @rdname chart.RiskReward #' @export -chart.Scatter.RP <- function(RP, neighbors = NULL, return.col='mean', risk.col='ES', ..., element.color = "darkgray", cex.axis=0.8){ +chart.Scatter.RP <- function(object, neighbors = NULL, ..., return.col='mean', risk.col='ES', element.color = "darkgray", cex.axis=0.8){ # more or less specific to the output of the random portfolio code with constraints # will work to a point with other functions, such as optimize.porfolio.parallel # there's still a lot to do to improve this. 
- if(!inherits(RP, "optimize.portfolio.random")){ - stop("RP must be of class 'optimize.portfolio.random'") + if(!inherits(object, "optimize.portfolio.random")){ + stop("object must be of class 'optimize.portfolio.random'") } - R <- RP$R - xtract = extractStats(RP) + R <- object$R + xtract = extractStats(object) columnnames = colnames(xtract) #return.column = grep(paste("objective_measures",return.col,sep='.'),columnnames) return.column = pmatch(return.col,columnnames) @@ -179,20 +170,20 @@ } points(xtract[1,risk.column],xtract[1,return.column], col="orange", pch=16) # overplot the equal weighted (or seed) - #check to see if portfolio 1 is EW RP$random_portoflios[1,] all weights should be the same - if(!isTRUE(all.equal(RP$random_portfolios[1,][1],1/length(RP$random_portfolios[1,]),check.attributes=FALSE))){ + #check to see if portfolio 1 is EW object$random_portoflios[1,] all weights should be the same + if(!isTRUE(all.equal(object$random_portfolios[1,][1],1/length(object$random_portfolios[1,]),check.attributes=FALSE))){ #show both the seed and EW if they are different #NOTE the all.equal comparison could fail above if the first element of the first portfolio is the same as the EW weight, #but the rest is not, shouldn't happen often with real portfolios, only toy examples points(xtract[2,risk.column],xtract[2,return.column], col="green", pch=16) # overplot the equal weighted (or seed) } ## @TODO: Generalize this to find column containing the "risk" metric - if(length(names(RP)[which(names(RP)=='constrained_objective')])) { + if(length(names(object)[which(names(object)=='constrained_objective')])) { result.slot<-'constrained_objective' } else { result.slot<-'objective_measures' } - objcols<-unlist(RP[[result.slot]]) + objcols<-unlist(object[[result.slot]]) names(objcols)<-PortfolioAnalytics:::name.replace(names(objcols)) return.column = pmatch(return.col,names(objcols)) if(is.na(return.column)) { @@ -210,7 +201,7 @@ return.col <- gsub("\\..*", "", return.col) risk.col <- gsub("\\..*", "", risk.col) # warning(return.col,' or ', risk.col, ' do not match extractStats output of $objective_measures slot') - opt_weights <- RP$weights + opt_weights <- object$weights ret <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=return.col)) risk <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=risk.col)) points(risk, ret, col="blue", pch=16) #optimal @@ -222,6 +213,10 @@ box(col = element.color) } +#' @rdname chart.RiskReward +#' @export +chart.RiskReward.optimize.portfolio.random <- chart.Scatter.RP + #' scatter and weights chart for random portfolios #' #' \code{neighbors} may be specified in three ways. @@ -243,16 +238,15 @@ #' \code{\link{optimize.portfolio}} #' \code{\link{extractStats}} #' @export -charts.RP <- function(RP, R=NULL, risk.col, return.col, - neighbors=NULL, main="Random.Portfolios", ...){ +charts.RP <- function(RP, risk.col, return.col, neighbors=NULL, main="Random.Portfolios", ...){ # Specific to the output of the random portfolio code with constraints # @TODO: check that RP is of the correct class op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,1.5),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.RP(RP, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) + chart.Scatter.RP(object=RP, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) par(mar=c(2,4,0,2)) - chart.Weights.RP(RP, main="", neighbors=neighbors, ...) + chart.Weights.RP(object=RP, main="", neighbors=neighbors, ...) 
par(op) } Added: pkg/PortfolioAnalytics/man/chart.RiskReward.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.RiskReward.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.RiskReward.Rd 2013-08-20 17:05:15 UTC (rev 2837) @@ -0,0 +1,95 @@ +\name{chart.Scatter.DE} +\alias{chart.RiskReward} +\alias{chart.RiskReward.optimize.portfolio.DEoptim} +\alias{chart.RiskReward.optimize.portfolio.GenSA} +\alias{chart.RiskReward.optimize.portfolio.pso} +\alias{chart.RiskReward.optimize.portfolio.random} +\alias{chart.RiskReward.optimize.portfolio.ROI} +\alias{chart.Scatter.DE} +\alias{chart.Scatter.GenSA} +\alias{chart.Scatter.pso} +\alias{chart.Scatter.ROI} +\alias{chart.Scatter.RP} +\title{classic risk reward scatter} +\usage{ + chart.Scatter.DE(object, neighbors = NULL, ..., + return.col = "mean", risk.col = "ES", + element.color = "darkgray", cex.axis = 0.8) + + chart.RiskReward.optimize.portfolio.DEoptim(object, + neighbors = NULL, ..., return.col = "mean", + risk.col = "ES", element.color = "darkgray", + cex.axis = 0.8) + + chart.Scatter.RP(object, neighbors = NULL, ..., + return.col = "mean", risk.col = "ES", + element.color = "darkgray", cex.axis = 0.8) + + chart.RiskReward.optimize.portfolio.random(object, + neighbors = NULL, ..., return.col = "mean", + risk.col = "ES", element.color = "darkgray", + cex.axis = 0.8) + + chart.Scatter.ROI(object, neighbors = NULL, ..., + rp = FALSE, return.col = "mean", risk.col = "ES", + element.color = "darkgray", cex.axis = 0.8) + + chart.RiskReward.optimize.portfolio.ROI(object, + neighbors = NULL, ..., rp = FALSE, return.col = "mean", + risk.col = "ES", element.color = "darkgray", + cex.axis = 0.8) + + chart.Scatter.pso(object, neighbors = NULL, ..., + return.col = "mean", risk.col = "ES", + element.color = "darkgray", cex.axis = 0.8) + + chart.RiskReward.optimize.portfolio.pso(object, + neighbors = NULL, ..., return.col = "mean", + risk.col = "ES", element.color = "darkgray", + cex.axis = 0.8) + + chart.Scatter.GenSA(object, neighbors = NULL, ..., + rp = FALSE, return.col = "mean", risk.col = "ES", + element.color = "darkgray", cex.axis = 0.8) + + chart.RiskReward.optimize.portfolio.GenSA(object, + neighbors = NULL, ..., rp = FALSE, return.col = "mean", + risk.col = "ES", element.color = "darkgray", + cex.axis = 0.8) + + chart.RiskReward(object, neighbors, ..., rp = FALSE, + return.col = "mean", risk.col = "ES", + element.color = "darkgray", cex.axis = 0.8) +} +\arguments{ + \item{object}{optimal portfolio created by + \code{\link{optimize.portfolio}}} + + \item{neighbors}{set of 'neighbor' portfolios to + overplot, see Details in \code{\link{charts.DE}}} + + \item{...}{any other passthru parameters} + + \item{rp}{TRUE/FALSE. 
If TRUE, random portfolios are + generated by \code{\link{random_portfolios}} to view the + feasible space} + + \item{return.col}{string matching the objective of a + 'return' objective, on vertical axis} + + \item{risk.col}{string matching the objective of a 'risk' + objective, on horizontal axis} + + \item{cex.axis}{The magnification to be used for axis + annotation relative to the current setting of \code{cex}} + + \item{element.color}{color for the default plot scatter + points} +} +\description{ + classic risk reward scatter +} +\seealso{ + \code{\link{optimize.portfolio}} +} + Modified: pkg/PortfolioAnalytics/man/chart.Scatter.DE.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Scatter.DE.Rd 2013-08-20 15:36:33 UTC (rev 2836) +++ pkg/PortfolioAnalytics/man/chart.Scatter.DE.Rd 2013-08-20 17:05:15 UTC (rev 2837) @@ -2,8 +2,8 @@ \alias{chart.Scatter.DE} \title{classic risk return scatter of DEoptim results} \usage{ - chart.Scatter.DE(DE, neighbors = NULL, - return.col = "mean", risk.col = "ES", ..., + chart.Scatter.DE(DE, neighbors = NULL, ..., + return.col = "mean", risk.col = "ES", element.color = "darkgray", cex.axis = 0.8) } \arguments{ Modified: pkg/PortfolioAnalytics/man/chart.Scatter.ROI.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Scatter.ROI.Rd 2013-08-20 15:36:33 UTC (rev 2836) +++ pkg/PortfolioAnalytics/man/chart.Scatter.ROI.Rd 2013-08-20 17:05:15 UTC (rev 2837) @@ -2,9 +2,9 @@ \alias{chart.Scatter.ROI} \title{classic risk return scatter of random portfolios} \usage{ - chart.Scatter.ROI(ROI, rp = NULL, return.col = "mean", - risk.col = "StdDev", ..., element.color = "darkgray", - cex.axis = 0.8, main = "") + chart.Scatter.ROI(ROI, neighbors = NULL, ..., rp = FALSE, + return.col = "mean", risk.col = "ES", + element.color = "darkgray", cex.axis = 0.8) } \arguments{ \item{ROI}{object created by Modified: pkg/PortfolioAnalytics/man/chart.Scatter.RP.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Scatter.RP.Rd 2013-08-20 15:36:33 UTC (rev 2836) +++ pkg/PortfolioAnalytics/man/chart.Scatter.RP.Rd 2013-08-20 17:05:15 UTC (rev 2837) @@ -2,8 +2,8 @@ \alias{chart.Scatter.RP} \title{classic risk return scatter of random portfolios} \usage{ - chart.Scatter.RP(RP, neighbors = NULL, - return.col = "mean", risk.col = "ES", ..., + chart.Scatter.RP(RP, neighbors = NULL, ..., + return.col = "mean", risk.col = "ES", element.color = "darkgray", cex.axis = 0.8) } \arguments{ Modified: pkg/PortfolioAnalytics/man/chart.Scatter.pso.Rd [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2837 From noreply at r-forge.r-project.org Tue Aug 20 19:10:09 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 20 Aug 2013 19:10:09 +0200 (CEST) Subject: [Returnanalytics-commits] r2838 - pkg/PortfolioAnalytics/R Message-ID: <20130820171010.0006018589A@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-20 19:10:09 +0200 (Tue, 20 Aug 2013) New Revision: 2838 Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R Log: returning the returns object, R, in optimize.portfolio if trace=TRUE. 
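A minimal sketch of the behavior this revision changes (assumes the same hypothetical `R` and `pspec` objects as in the earlier example; not part of the commit):

opt1 <- optimize.portfolio(R=R, portfolio=pspec,
                           optimize_method="random", trace=TRUE)
head(opt1$R)     # the returns object is now attached to the result
opt2 <- optimize.portfolio(R=R, portfolio=pspec,
                           optimize_method="random", trace=FALSE)
is.null(opt2$R)  # TRUE after this change; R is only stored when trace=TRUE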
Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-20 17:05:15 UTC (rev 2837) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-20 17:10:09 UTC (rev 2838) @@ -831,7 +831,7 @@ # print(c("elapsed time:",round(end_t-start_t,2),":diff:",round(diff,2), ":stats: ", round(out$stats,4), ":targets:",out$targets)) if(message) message(c("elapsed time:", end_t-start_t)) out$portfolio <- portfolio - out$R <- R + if(trace) out$R <- R out$data_summary <- list(first=first(R), last=last(R)) out$elapsed_time <- end_t - start_t out$end_t <- as.character(Sys.time()) From noreply at r-forge.r-project.org Tue Aug 20 20:44:57 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 20 Aug 2013 20:44:57 +0200 (CEST) Subject: [Returnanalytics-commits] r2839 - in pkg/FactorAnalytics: R vignettes Message-ID: <20130820184457.51E5B185C06@r-forge.r-project.org> Author: chenyian Date: 2013-08-20 20:44:57 +0200 (Tue, 20 Aug 2013) New Revision: 2839 Modified: pkg/FactorAnalytics/R/plot.StatFactorModel.r pkg/FactorAnalytics/R/plot.TimeSeriesFactorModel.r pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw Log: add statistical factor model section in vignette. Modified: pkg/FactorAnalytics/R/plot.StatFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/plot.StatFactorModel.r 2013-08-20 17:10:09 UTC (rev 2838) +++ pkg/FactorAnalytics/R/plot.StatFactorModel.r 2013-08-20 18:44:57 UTC (rev 2839) @@ -56,408 +56,405 @@ #' @method plot StatFactorModel #' @export plot.StatFactorModel <- -function(x, variables, cumulative = TRUE, style = "bar", - which.plot = c("none","1L","2L","3L","4L","5L","6L","7L","8L"), - hgrid = FALSE, vgrid = FALSE,plot.single=FALSE, asset.name, - which.plot.single=c("none","1L","2L","3L","4L","5L","6L", - "7L","8L","9L","10L","11L","12L","13L"), - max.show=6, VaR.method = "historical",...) -{ - require(strucchange) - require(ellipse) - # - # beginning of funciton screenplot - # - screeplot<- - function(mf, variables, cumulative = TRUE, style = "bar", main = "", ...) + function(x, variables, cumulative = TRUE, style = "bar", + which.plot = c("none","1L","2L","3L","4L","5L","6L","7L","8L"), + hgrid = FALSE, vgrid = FALSE,plot.single=FALSE, asset.name, + which.plot.single=c("none","1L","2L","3L","4L","5L","6L", + "7L","8L","9L","10L","11L","12L","13L"), + max.show=6, VaR.method = "historical",...) { - vars <- mf$eigen - n90 <- which(cumsum(vars)/sum(vars) > 0.9)[1] - if(missing(variables)) { - variables <- 1:max(mf$k, min(10, n90)) - } - istyle <- charmatch(style, c("bar", "lines"), nomatch = NA) - if(is.na(istyle) || istyle <= 1) - style <- "bar" - else { - style <- "lines" - } - if(style == "bar") { - loc <- barplot(vars[variables], names = paste("F", variables, - sep = "."), main = main, ylab = "Variances", ...) - } - else { - loc <- 1:length(variables) - plot(loc, vars[variables], type = "b", axes = F, main = main, - ylab = "Variances", xlab = "") - axis(2) - axis(1, at = loc, labels = paste("F", variables, sep = ".")) - } - if(cumulative) { - cumv <- (cumsum(vars)/sum(vars))[variables] - text(loc, vars[variables] + par("cxy")[2], as.character(signif( - cumv, 3))) - } - invisible(loc) - } - # - # end of screenplot - # - - if (plot.single==TRUE) { - ## inputs: - ## x lm object summarizing factor model fit. 
It is assumed that - ## time series date information is included in the names component - ## of the residuals, fitted and model components of the object. - ## asset.name charater. The name of the single asset to be ploted. - ## which.plot.single integer indicating which plot to create: - ## 1 time series plot of actual and fitted values - ## 2 time series plot of residuals with standard error bands - ## 3 time series plot of squared residuals - ## 4 time series plot of absolute residuals - ## 5 SACF and PACF of residuals - ## 6 SACF and PACF of squared residuals - ## 7 SACF and PACF of absolute residuals - ## 8 histogram of residuals with normal curve overlayed - ## 9 normal qq-plot of residuals - ## 10 CUSUM plot of recursive residuals - ## 11 CUSUM plot of OLS residuals - ## 12 CUSUM plot of recursive estimates relative to full sample estimates - ## 13 rolling estimates over 24 month window - which.plot.single<-which.plot.single[1] + require(strucchange) + require(ellipse) + # + # beginning of funciton screenplot + # + screeplot<- + function(mf, variables, cumulative = TRUE, style = "bar", main = "", ...) + { + vars <- mf$eigen + if(missing(variables)) { + variables <- 1:mf$k + } + istyle <- charmatch(style, c("bar", "lines"), nomatch = NA) + if(is.na(istyle) || istyle <= 1) + style <- "bar" + else { + style <- "lines" + } + if(style == "bar") { + loc <- barplot(vars[variables]/sum(vars), + names = paste("F", variables,sep = "."), + main = main, ylab = "Percentage of Variances", ...) + } + else { + loc <- 1:length(variables) + plot(loc, vars[variables]/sum(vars), type = "b", axes = F, main = main, + ylab = "Percentage of Variances", xlab = "") + axis(2) + axis(1, at = loc, labels = paste("F", variables, sep = ".")) + } + if(cumulative) { + cumv <- (cumsum(vars)/sum(vars))[variables] + text(loc, vars[variables] + par("cxy")[2], as.character(signif( + cumv, 3))) + } + invisible(loc) + } + # + # end of screenplot + # - - - - if (which.plot.single=="none") - - - # pca method - - if ( dim(x$asset.ret)[1] > dim(x$asset.ret)[2] ) { + if (plot.single==TRUE) { + ## inputs: + ## x lm object summarizing factor model fit. It is assumed that + ## time series date information is included in the names component + ## of the residuals, fitted and model components of the object. + ## asset.name charater. The name of the single asset to be ploted. 
+ ## which.plot.single integer indicating which plot to create: + ## 1 time series plot of actual and fitted values + ## 2 time series plot of residuals with standard error bands + ## 3 time series plot of squared residuals + ## 4 time series plot of absolute residuals + ## 5 SACF and PACF of residuals + ## 6 SACF and PACF of squared residuals + ## 7 SACF and PACF of absolute residuals + ## 8 histogram of residuals with normal curve overlayed + ## 9 normal qq-plot of residuals + ## 10 CUSUM plot of recursive residuals + ## 11 CUSUM plot of OLS residuals + ## 12 CUSUM plot of recursive estimates relative to full sample estimates + ## 13 rolling estimates over 24 month window + which.plot.single<-which.plot.single[1] - fit.lm = x$asset.fit[[asset.name]] - - - ## exact information from lm object - - factorNames = colnames(fit.lm$model)[-1] - fit.formula = as.formula(paste(asset.name,"~", paste(factorNames, collapse="+"), sep=" ")) - #Date = try(as.Date(names(residuals(fit.lm)))) - #Date = try(as.yearmon(names(residuals(fit.lm)),"%b %Y")) - residuals.z = zoo(residuals(fit.lm), as.Date(names(residuals(fit.lm)))) - fitted.z = zoo(fitted(fit.lm), as.Date(names(fitted(fit.lm)))) - actual.z = zoo(fit.lm$model[,1], as.Date(rownames(fit.lm$model))) - tmp.summary = summary(fit.lm) - - which.plot.single<-menu(c("time series plot of actual and fitted values", - "time series plot of residuals with standard error bands", - "time series plot of squared residuals", - "time series plot of absolute residuals", - "SACF and PACF of residuals", - "SACF and PACF of squared residuals", - "SACF and PACF of absolute residuals", - "histogram of residuals with normal curve overlayed", - "normal qq-plot of residuals", - "CUSUM plot of recursive residuals", - "CUSUM plot of OLS residuals", - "CUSUM plot of recursive estimates relative to full sample estimates", - "rolling estimates over 24 month window"), - title="\nMake a plot selection (or 0 to exit):\n") - - switch(which.plot.single, - "1L" = { - ## time series plot of actual and fitted values - plot(actual.z, main=asset.name, ylab="Monthly performance", lwd=2, col="black") - lines(fitted.z, lwd=2, col="red") - abline(h=0) - legend(x="bottomleft", legend=c("Actual", "Fitted"), lwd=2, col=c("black","red")) - }, - - "2L" = { - ## time series plot of residuals with standard error bands - plot(residuals.z, main=asset.name, ylab="Monthly performance", lwd=2, col="black") - abline(h=0) - abline(h=2*tmp.summary$sigma, lwd=2, lty="dotted", col="red") - abline(h=-2*tmp.summary$sigma, lwd=2, lty="dotted", col="red") - legend(x="bottomleft", legend=c("Residual", "+/ 2*SE"), lwd=2, - lty=c("solid","dotted"), col=c("black","red")) - }, - "3L" = { - ## time series plot of squared residuals - plot(residuals.z^2, main=asset.name, ylab="Squared residual", lwd=2, col="black") - abline(h=0) - legend(x="topleft", legend="Squared Residuals", lwd=2, col="black") - }, - "4L" = { - ## time series plot of absolute residuals - plot(abs(residuals.z), main=asset.name, ylab="Absolute residual", lwd=2, col="black") - abline(h=0) - legend(x="topleft", legend="Absolute Residuals", lwd=2, col="black") - }, - "5L" = { - ## SACF and PACF of residuals - chart.ACFplus(residuals.z, main=paste("Residuals: ", asset.name, sep="")) - }, - "6L" = { - ## SACF and PACF of squared residuals - chart.ACFplus(residuals.z^2, main=paste("Residuals^2: ", asset.name, sep="")) - }, - "7L" = { - ## SACF and PACF of absolute residuals - chart.ACFplus(abs(residuals.z), main=paste("|Residuals|: ", asset.name, sep="")) - }, 
- "8L" = { - ## histogram of residuals with normal curve overlayed - chart.Histogram(residuals.z, methods="add.normal", main=paste("Residuals: ", asset.name, sep="")) - }, - "9L" = { - ## normal qq-plot of residuals - chart.QQPlot(residuals.z, envelope=0.95, main=paste("Residuals: ", asset.name, sep="")) - }, - "10L"= { - ## CUSUM plot of recursive residuals - - cusum.rec = efp(fit.formula, type="Rec-CUSUM", data=fit.lm$model) - plot(cusum.rec, sub=asset.name) - - }, - "11L"= { - ## CUSUM plot of OLS residuals - - cusum.ols = efp(fit.formula, type="OLS-CUSUM", data=fit.lm$model) - - }, - "12L"= { - ## CUSUM plot of recursive estimates relative to full sample estimates + + + if (which.plot.single=="none") + + + # pca method + + if ( dim(x$asset.ret)[1] > dim(x$asset.ret)[2] ) { + + + fit.lm = x$asset.fit[[asset.name]] + + + ## exact information from lm object + + factorNames = colnames(fit.lm$model)[-1] + fit.formula = as.formula(paste(asset.name,"~", paste(factorNames, collapse="+"), sep=" ")) + #Date = try(as.Date(names(residuals(fit.lm)))) + #Date = try(as.yearmon(names(residuals(fit.lm)),"%b %Y")) + residuals.z = zoo(residuals(fit.lm), as.Date(names(residuals(fit.lm)))) + fitted.z = zoo(fitted(fit.lm), as.Date(names(fitted(fit.lm)))) + actual.z = zoo(fit.lm$model[,1], as.Date(rownames(fit.lm$model))) + tmp.summary = summary(fit.lm) + + which.plot.single<-menu(c("time series plot of actual and fitted values", + "time series plot of residuals with standard error bands", + "time series plot of squared residuals", + "time series plot of absolute residuals", + "SACF and PACF of residuals", + "SACF and PACF of squared residuals", + "SACF and PACF of absolute residuals", + "histogram of residuals with normal curve overlayed", + "normal qq-plot of residuals", + "CUSUM plot of recursive residuals", + "CUSUM plot of OLS residuals", + "CUSUM plot of recursive estimates relative to full sample estimates", + "rolling estimates over 24 month window"), + title="\nMake a plot selection (or 0 to exit):\n") + + switch(which.plot.single, + "1L" = { + ## time series plot of actual and fitted values + plot(actual.z, main=asset.name, ylab="Monthly performance", lwd=2, col="black") + lines(fitted.z, lwd=2, col="red") + abline(h=0) + legend(x="bottomleft", legend=c("Actual", "Fitted"), lwd=2, col=c("black","red")) + }, + + "2L" = { + ## time series plot of residuals with standard error bands + plot(residuals.z, main=asset.name, ylab="Monthly performance", lwd=2, col="black") + abline(h=0) + abline(h=2*tmp.summary$sigma, lwd=2, lty="dotted", col="red") + abline(h=-2*tmp.summary$sigma, lwd=2, lty="dotted", col="red") + legend(x="bottomleft", legend=c("Residual", "+/ 2*SE"), lwd=2, + lty=c("solid","dotted"), col=c("black","red")) + }, + "3L" = { + ## time series plot of squared residuals + plot(residuals.z^2, main=asset.name, ylab="Squared residual", lwd=2, col="black") + abline(h=0) + legend(x="topleft", legend="Squared Residuals", lwd=2, col="black") + }, + "4L" = { + ## time series plot of absolute residuals + plot(abs(residuals.z), main=asset.name, ylab="Absolute residual", lwd=2, col="black") + abline(h=0) + legend(x="topleft", legend="Absolute Residuals", lwd=2, col="black") + }, + "5L" = { + ## SACF and PACF of residuals + chart.ACFplus(residuals.z, main=paste("Residuals: ", asset.name, sep="")) + }, + "6L" = { + ## SACF and PACF of squared residuals + chart.ACFplus(residuals.z^2, main=paste("Residuals^2: ", asset.name, sep="")) + }, + "7L" = { + ## SACF and PACF of absolute residuals + 
chart.ACFplus(abs(residuals.z), main=paste("|Residuals|: ", asset.name, sep="")) + }, + "8L" = { + ## histogram of residuals with normal curve overlayed + chart.Histogram(residuals.z, methods="add.normal", main=paste("Residuals: ", asset.name, sep="")) + }, + "9L" = { + ## normal qq-plot of residuals + chart.QQPlot(residuals.z, envelope=0.95, main=paste("Residuals: ", asset.name, sep="")) + }, + "10L"= { + ## CUSUM plot of recursive residuals - cusum.est = efp(fit.formula, type="fluctuation", data=fit.lm$model) - plot(cusum.est, functional=NULL, sub=asset.name) - - }, - "13L"= { - ## rolling regression over 24 month window - - rollReg <- function(data.z, formula) { - coef(lm(formula, data = as.data.frame(data.z))) - } - reg.z = zoo(fit.lm$model, as.Date(rownames(fit.lm$model))) - rollReg.z = rollapply(reg.z, FUN=rollReg, fit.formula, width=24, by.column = FALSE, - align="right") - plot(rollReg.z, main=paste("24-month rolling regression estimates:", asset.name, sep=" ")) - - }, - invisible() - ) - } else { #apca method + cusum.rec = efp(fit.formula, type="Rec-CUSUM", data=fit.lm$model) + plot(cusum.rec, sub=asset.name) + + }, + "11L"= { + ## CUSUM plot of OLS residuals + + cusum.ols = efp(fit.formula, type="OLS-CUSUM", data=fit.lm$model) + + }, + "12L"= { + ## CUSUM plot of recursive estimates relative to full sample estimates + + cusum.est = efp(fit.formula, type="fluctuation", data=fit.lm$model) + plot(cusum.est, functional=NULL, sub=asset.name) + + }, + "13L"= { + ## rolling regression over 24 month window + + rollReg <- function(data.z, formula) { + coef(lm(formula, data = as.data.frame(data.z))) + } + reg.z = zoo(fit.lm$model, as.Date(rownames(fit.lm$model))) + rollReg.z = rollapply(reg.z, FUN=rollReg, fit.formula, width=24, by.column = FALSE, + align="right") + plot(rollReg.z, main=paste("24-month rolling regression estimates:", asset.name, sep=" ")) + + }, + invisible() + ) + } else { #apca method + + dates <- names(x$data[,asset.name]) + actual.z <- zoo(x$asset.ret[,asset.name],as.Date(dates)) + residuals.z <- zoo(x$residuals,as.Date(dates)) + fitted.z <- actual.z - residuals.z + t <- length(dates) + k <- x$k + + which.plot.single<-menu(c("time series plot of actual and fitted values", + "time series plot of residuals with standard error bands", + "time series plot of squared residuals", + "time series plot of absolute residuals", + "SACF and PACF of residuals", + "SACF and PACF of squared residuals", + "SACF and PACF of absolute residuals", + "histogram of residuals with normal curve overlayed", + "normal qq-plot of residuals"), + title="\nMake a plot selection (or 0 to exit):\n") + switch(which.plot.single, + "1L" = { + # "time series plot of actual and fitted values", + + plot(actual.z[,asset.name], main=asset.name, ylab="Monthly performance", lwd=2, col="black") + lines(fitted.z[,asset.name], lwd=2, col="red") + abline(h=0) + legend(x="bottomleft", legend=c("Actual", "Fitted"), lwd=2, col=c("black","red")) + }, + "2L"={ + # "time series plot of residuals with standard error bands" + plot(residuals.z[,asset.name], main=asset.name, ylab="Monthly performance", lwd=2, col="black") + abline(h=0) + sigma = (sum(residuals.z[,asset.name]^2)*(t-k)^-1)^(1/2) + abline(h=2*sigma, lwd=2, lty="dotted", col="red") + abline(h=-2*sigma, lwd=2, lty="dotted", col="red") + legend(x="bottomleft", legend=c("Residual", "+/ 2*SE"), lwd=2, + lty=c("solid","dotted"), col=c("black","red")) + + }, + "3L"={ + # "time series plot of squared residuals" + plot(residuals.z[,asset.name]^2, main=asset.name, 
ylab="Squared residual", lwd=2, col="black") + abline(h=0) + legend(x="topleft", legend="Squared Residuals", lwd=2, col="black") + }, + "4L" = { + ## time series plot of absolute residuals + plot(abs(residuals.z[,asset.name]), main=asset.name, ylab="Absolute residual", lwd=2, col="black") + abline(h=0) + legend(x="topleft", legend="Absolute Residuals", lwd=2, col="black") + }, + "5L" = { + ## SACF and PACF of residuals + chart.ACFplus(residuals.z[,asset.name], main=paste("Residuals: ", asset.name, sep="")) + }, + "6L" = { + ## SACF and PACF of squared residuals + chart.ACFplus(residuals.z[,asset.name]^2, main=paste("Residuals^2: ", asset.name, sep="")) + }, + "7L" = { + ## SACF and PACF of absolute residuals + chart.ACFplus(abs(residuals.z[,asset.name]), main=paste("|Residuals|: ", asset.name, sep="")) + }, + "8L" = { + ## histogram of residuals with normal curve overlayed + chart.Histogram(residuals.z[,asset.name], methods="add.normal", main=paste("Residuals: ", asset.name, sep="")) + }, + "9L" = { + ## normal qq-plot of residuals + chart.QQPlot(residuals.z[,asset.name], envelope=0.95, main=paste("Residuals: ", asset.name, sep="")) + }, + invisible() ) + } - dates <- names(x$data[,asset.name]) - actual.z <- zoo(x$asset.ret[,asset.name],as.Date(dates)) - residuals.z <- zoo(x$residuals,as.Date(dates)) - fitted.z <- actual.z - residuals.z - t <- length(dates) - k <- x$k - which.plot.single<-menu(c("time series plot of actual and fitted values", - "time series plot of residuals with standard error bands", - "time series plot of squared residuals", - "time series plot of absolute residuals", - "SACF and PACF of residuals", - "SACF and PACF of squared residuals", - "SACF and PACF of absolute residuals", - "histogram of residuals with normal curve overlayed", - "normal qq-plot of residuals"), - title="\nMake a plot selection (or 0 to exit):\n") - switch(which.plot.single, - "1L" = { -# "time series plot of actual and fitted values", - - plot(actual.z[,asset.name], main=asset.name, ylab="Monthly performance", lwd=2, col="black") - lines(fitted.z[,asset.name], lwd=2, col="red") - abline(h=0) - legend(x="bottomleft", legend=c("Actual", "Fitted"), lwd=2, col=c("black","red")) + } else { + which.plot<-which.plot[1] + + + ## + ## 2. Plot selected choices. + ## + + + if(which.plot=='none') + which.plot <- menu(c("Screeplot of Eigenvalues", + "Factor Returns", + "FM Correlation", + "R square", + "Variance of Residuals", + "Factor Contributions to SD", + "Factor Contributions to ES", + "Factor Contributions to VaR"), title = + "\nMake a plot selection (or 0 to exit):\n") + + switch(which.plot, + "1L" = { + ## 1. screeplot. 
+ if(missing(variables)) { + vars <- x$eigen + variables <- 1:x$k + } + screeplot(x, variables, cumulative, + style, "Screeplot of Eigenvalues") }, - "2L"={ -# "time series plot of residuals with standard error bands" - plot(residuals.z[,asset.name], main=asset.name, ylab="Monthly performance", lwd=2, col="black") - abline(h=0) - sigma = (sum(residuals.z[,asset.name]^2)*(t-k)^-1)^(1/2) - abline(h=2*sigma, lwd=2, lty="dotted", col="red") - abline(h=-2*sigma, lwd=2, lty="dotted", col="red") - legend(x="bottomleft", legend=c("Residual", "+/ 2*SE"), lwd=2, - lty=c("solid","dotted"), col=c("black","red")) - - }, - "3L"={ - # "time series plot of squared residuals" - plot(residuals.z[,asset.name]^2, main=asset.name, ylab="Squared residual", lwd=2, col="black") - abline(h=0) - legend(x="topleft", legend="Squared Residuals", lwd=2, col="black") - }, - "4L" = { - ## time series plot of absolute residuals - plot(abs(residuals.z[,asset.name]), main=asset.name, ylab="Absolute residual", lwd=2, col="black") - abline(h=0) - legend(x="topleft", legend="Absolute Residuals", lwd=2, col="black") + "2L" = { + ## + ## 2. factor returns + ## + if(missing(variables)) { + f.ret <- x$factors + } + plot.zoo(f.ret) + + } , + "3L" = { + cov.fm<- factorModelCovariance(t(x$loadings),var(x$factors), + x$resid.variance) + cor.fm = cov2cor(cov.fm) + rownames(cor.fm) = colnames(cor.fm) + ord <- order(cor.fm[1,]) + ordered.cor.fm <- cor.fm[ord, ord] + plotcorr(ordered.cor.fm[(1:max.show),(1:max.show)], col=cm.colors(11)[5*ordered.cor.fm + 6]) }, + "4L" ={ + barplot(x$r2[1:max.show]) + }, "5L" = { - ## SACF and PACF of residuals - chart.ACFplus(residuals.z[,asset.name], main=paste("Residuals: ", asset.name, sep="")) + barplot(x$resid.variance[1:max.show]) }, "6L" = { - ## SACF and PACF of squared residuals - chart.ACFplus(residuals.z[,asset.name]^2, main=paste("Residuals^2: ", asset.name, sep="")) + cov.factors = var(x$factors) + names = colnames(x$asset.ret) + factor.sd.decomp.list = list() + for (i in names) { + factor.sd.decomp.list[[i]] = + factorModelSdDecomposition(x$loadings[,i], + cov.factors, x$resid.variance[i]) + } + # function to extract contribution to sd from list + getCSD = function(x) { + x$cr.fm + } + # extract contributions to SD from list + cr.sd = sapply(factor.sd.decomp.list, getCSD) + rownames(cr.sd) = c(colnames(x$factors), "residual") + # create stacked barchart + barplot(cr.sd[,(1:max.show)], main="Factor Contributions to SD", + legend.text=T, args.legend=list(x="topleft")) + } , + "7L" ={ + factor.es.decomp.list = list() + names = colnames(x$asset.ret) + for (i in names) { + # check for missing values in fund data + idx = which(!is.na(x$asset.ret[,i])) + tmpData = cbind(x$asset.ret[idx,i], x$factors, + x$residuals[,i]/sqrt(x$resid.variance[i])) + colnames(tmpData)[c(1,length(tmpData[1,]))] = c(i, "residual") + factor.es.decomp.list[[i]] = + factorModelEsDecomposition(tmpData, + x$loadings[,i], + x$resid.variance[i], tail.prob=0.05,VaR.method=VaR.method) + } + + + # stacked bar charts of percent contributions to ES + getCETL = function(x) { + x$cES + } + # report as positive number + cr.etl = sapply(factor.es.decomp.list, getCETL) + rownames(cr.etl) = c(colnames(x$factors), "residual") + barplot(cr.etl[,(1:max.show)], main="Factor Contributions to ES", + legend.text=T, args.legend=list(x="topleft") ) }, - "7L" = { - ## SACF and PACF of absolute residuals - chart.ACFplus(abs(residuals.z[,asset.name]), main=paste("|Residuals|: ", asset.name, sep="")) - }, - "8L" = { - ## histogram of residuals with 
normal curve overlayed - chart.Histogram(residuals.z[,asset.name], methods="add.normal", main=paste("Residuals: ", asset.name, sep="")) - }, - "9L" = { - ## normal qq-plot of residuals - chart.QQPlot(residuals.z[,asset.name], envelope=0.95, main=paste("Residuals: ", asset.name, sep="")) - }, - invisible() ) - } - - - } else { - which.plot<-which.plot[1] - - - ## - ## 2. Plot selected choices. - ## - - - which.plot <- menu(c("Screeplot of Eigenvalues", - "Factor Returns", - "FM Correlation", - "R square", - "Variance of Residuals", - "Factor Contributions to SD", - "Factor Contributions to ES", - "Factor Contributions to VaR"), title = - "\nMake a plot selection (or 0 to exit):\n") + "8L" = { + factor.VaR.decomp.list = list() + names = colnames(x$asset.ret) + for (i in names) { + # check for missing values in fund data + idx = which(!is.na(x$asset.ret[,i])) + tmpData = cbind(x$asset.ret[idx,i], x$factors, + x$residuals[,i]/sqrt(x$resid.variance[i])) + colnames(tmpData)[c(1,length(tmpData[1,]))] = c(i, "residual") + factor.VaR.decomp.list[[i]] = + factorModelVaRDecomposition(tmpData, + x$loadings[,i], + x$resid.variance[i], tail.prob=0.05,VaR.method=VaR.method) + } + + + # stacked bar charts of percent contributions to VaR + getCVaR = function(x) { + x$cVaR.fm + } + # report as positive number + cr.var = sapply(factor.VaR.decomp.list, getCVaR) + rownames(cr.var) = c(colnames(x$factors), "residual") + barplot(cr.var[,(1:max.show)], main="Factor Contributions to VaR", + legend.text=T, args.legend=list(x="topleft")) + }, invisible() + + ) - switch(which.plot, - "1L" = { - ## - ## 1. screeplot. - ## - if(missing(variables)) { - vars <- x$eigen - n90 <- which(cumsum(vars)/ - sum(vars) > 0.9)[1] - variables <- 1:max(x$k, min(10, n90)) + } + } - screeplot(x, variables, cumulative, - style, "Screeplot") - }, - "2L" = { - ## - ## 2. factor returns - ## - if(missing(variables)) { - f.ret <- x$factors - } - plot.ts(f.ret) - -} , - "3L" = { - cov.fm<- factorModelCovariance(t(x$loadings),var(x$factors), - x$resid.variance) - cor.fm = cov2cor(cov.fm) - rownames(cor.fm) = colnames(cor.fm) - ord <- order(cor.fm[1,]) [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2839 From noreply at r-forge.r-project.org Tue Aug 20 21:32:30 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 20 Aug 2013 21:32:30 +0200 (CEST) Subject: [Returnanalytics-commits] r2840 - pkg/FactorAnalytics/vignettes Message-ID: <20130820193230.D47771859DF@r-forge.r-project.org> Author: chenyian Date: 2013-08-20 21:32:30 +0200 (Tue, 20 Aug 2013) New Revision: 2840 Modified: pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw Log: add section time series factor model for vignette Modified: pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw =================================================================== --- pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-20 18:44:57 UTC (rev 2839) +++ pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-20 19:32:30 UTC (rev 2840) @@ -171,12 +171,12 @@ \section{Standardizing Factor Exposure} It is common to standardize factor exposure to have weight mean 0 and standard deviation equal to 1. The weight are often taken as proportional to square root of market capitalization, although other weighting schemes are possible. -We will try example 1 but with stardarized factor exposure with square root of market capitalization. First we create a weighting variable. 
+We will try example 1 but with standardized factor exposure with square root of market capitalization. First we create a weighting variable. <>= equity$weight <- sqrt(exp(equity$MV)) # we take log for MV before. @ -We can choose \verb at standardized.factor.exposure@ to be \verb at TRUE@ and \verb at weight.var@ equal to weighting variabel. +We can choose \verb at standardized.factor.exposure@ to be \verb at TRUE@ and \verb at weight.var@ equal to weighting variable. <>= fit.fund2 <- fitFundamentalFactorModel(exposure.names=c("BM","MV"), datevar="yearqtr",returnsvar ="RET", @@ -187,7 +187,7 @@ \section{Statistical Factor Model} -In statistical factor model, neighter factor exposure b (normally called factor loadings in statistical factor model) nor factor returns $f_t$ are observed in equation \ref{fm2}: +In statistical factor model, neither factor exposure b (normally called factor loadings in statistical factor model) nor factor returns $f_t$ are observed in equation \ref{fm2}: \begin{equation} r_t = bf_t + \epsilon_t\;,t=1 \cdots T \label{fm2} \end{equation} @@ -199,7 +199,7 @@ In some cases, when number of assets N is larger than number of time period T. Connor and Korajczyk (1986) develop an alternative method called asymptotic principal components, building on the approximate factor model theory of Chamberlain and Rothschild (1983). Connor and Korajczyk analyze the eigenvector of the T X T cross product of matrix returns rather then N X N covariance matrix of returns. They show the first k eigenvectors of this cross product matrix provide consistent estimates of the k X T matrix of factor returns. -We can use function \verb at fitStatisticalFactorModel@ to fit statistical factor model. First, we need asset returns in time series or xts format. We choose xts to work with because time index is easy to handle but this is not restircted to the model fit. +We can use function \verb at fitStatisticalFactorModel@ to fit statistical factor model. First, we need asset returns in time series or xts format. We choose xts to work with because time index is easy to handle but this is not restricted to the model fit. <>= library(xts) @@ -222,7 +222,7 @@ names(fit.stat) @ -5 factors is choosen by Bai and Ng (2002). Factor retunrs can be found using \verb@$factors at . +5 factors is chosen by Bai and Ng (2002). Factor returns can be found using \verb@$factors at . <>= fit.stat$k @ @@ -258,6 +258,45 @@ Similar to \verb at fitFundamentalFactorModel@, generic functions like \verb at summary@, \verb at print@, \verb at plot@ and \verb at predict@ can be used for statistical factor model. +\section{Time Series Factor Model} +In Time Series facto model, factor returns $f_t$ is observed and takens as macroeconomic time series like GDP or other time series like market returns or credit spread. In our package, we have provided some common used times series in data set \verb at CommonFactors@. \verb at factors@ is monthly time series and \verb at factors.Q@ is quarterly time series. +<>= +data(CommonFactors) +names(factors.Q) +@ +We can combine with our data \verb at ret@ and get rid of NA values. + +<>= +ts.factors <- xts(factors.Q,as.yearqtr(index(factors.Q),format="%Y-%m-%d")) +ts.data <- na.omit(merge(ret,ts.factors)) +@ + +We will use SP500, 10 years and 3 months term spread and difference of VIX as our factors. 
+ +<>= +fit.time <- fitTimeSeriesFactorModel(assets.names=tic,factors.names=c("SP500","Term.Spread","dVIX"),data=ts.data,fit.method="OLS") +names(fit.time) +@ + +\veb at asset.fit@ can show model fit of each assets, for example for asset \verb at AA@. +<>= +summary(fit.time$asset.fit$AA) +@ +shows only beta of SP500 is significant. + +\verb at fitTimeSeriesFactorModel@ also have various variable selection algorithm to choose. One can include all the factor and let the function to decide which one is the best model. For example, we inlcude all common factors and use method \verb at stepwise@ which utilizes \verb at step@ function in \verb at stat@ package + +<>= +fit.time2 <- fitTimeSeriesFactorModel(assets.names=tic,factors.names=names(ts.factors),data=ts.data,fit.method="OLS",variable.selection = "stepwise") +@ +There are 5 factors chosen for asset AA for example. +<>= +fit.time2$asset.fit$AA +@ + +Generic functions like \verb at summary@, \verb at print@, \verb at plot@ and \verb at predict@ can be used for Time Series factor model as well like previous section. + + \end{document} \ No newline at end of file From noreply at r-forge.r-project.org Tue Aug 20 23:08:55 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 20 Aug 2013 23:08:55 +0200 (CEST) Subject: [Returnanalytics-commits] r2841 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130820210855.DFA1018499E@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-20 23:08:55 +0200 (Tue, 20 Aug 2013) New Revision: 2841 Added: pkg/PortfolioAnalytics/man/scatterFUN.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/applyFUN.R pkg/PortfolioAnalytics/R/chart.RiskReward.R pkg/PortfolioAnalytics/R/charts.DE.R pkg/PortfolioAnalytics/R/charts.GenSA.R pkg/PortfolioAnalytics/R/charts.PSO.R pkg/PortfolioAnalytics/R/charts.ROI.R pkg/PortfolioAnalytics/R/charts.RP.R pkg/PortfolioAnalytics/man/chart.RiskReward.Rd pkg/PortfolioAnalytics/man/charts.DE.Rd pkg/PortfolioAnalytics/man/charts.GenSA.Rd pkg/PortfolioAnalytics/man/charts.ROI.Rd pkg/PortfolioAnalytics/man/charts.RP.Rd pkg/PortfolioAnalytics/man/charts.pso.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd Log: Adding functionality to chart.Scatter.* functions to plot a risk-reward scatter of the assets. Added 'Optimal' label to the optimal portfolio in the scatter chart. 
Modified: pkg/PortfolioAnalytics/NAMESPACE
===================================================================
--- pkg/PortfolioAnalytics/NAMESPACE	2013-08-20 19:32:30 UTC (rev 2840)
+++ pkg/PortfolioAnalytics/NAMESPACE	2013-08-20 21:08:55 UTC (rev 2841)
@@ -96,6 +96,7 @@
 export(return_objective)
 export(risk_budget_objective)
 export(rp_transform)
+export(scatterFUN)
 export(set.portfolio.moments_v1)
 export(set.portfolio.moments_v2)
 export(set.portfolio.moments)

Modified: pkg/PortfolioAnalytics/R/applyFUN.R
===================================================================
--- pkg/PortfolioAnalytics/R/applyFUN.R	2013-08-20 19:32:30 UTC (rev 2840)
+++ pkg/PortfolioAnalytics/R/applyFUN.R	2013-08-20 21:08:55 UTC (rev 2841)
@@ -81,4 +81,65 @@
     out <- try(do.call(fun, .formals))
   }
   return(out)
-}
\ No newline at end of file
+}
+
+#' Apply a risk or return function to asset returns
+#'
+#' This function is used to calculate risk or return metrics given a matrix of
+#' asset returns and will be used for a risk-reward scatter plot of the assets.
+#'
+#' @param R matrix of asset returns
+#' @param FUN name of the risk or return function to apply to each column of R
+#' @param ... any passthrough arguments to FUN
+#' @author Ross Bennett
+#' @export
+scatterFUN <- function(R, FUN, ...){
+  nargs <- list(...)
+  
+  # match the FUN arg to a risk or return function
+  switch(FUN,
+         mean = {
+           return(as.numeric(apply(R, 2, mean)))
+           #fun = match.fun(mean)
+           #nargs$x = R
+         },
+         sd =,
+         StdDev = {
+           fun = match.fun(StdDev)
+         },
+         mVaR =,
+         VaR = {
+           fun = match.fun(VaR)
+           if(is.null(nargs$portfolio_method)) nargs$portfolio_method='single'
+           if(is.null(nargs$invert)) nargs$invert = FALSE
+         },
+         es =,
+         mES =,
+         CVaR =,
+         cVaR =,
+         ETL =,
+         mETL =,
+         ES = {
+           fun = match.fun(ES)
+           if(is.null(nargs$portfolio_method)) nargs$portfolio_method='single'
+           if(is.null(nargs$invert)) nargs$invert = FALSE
+         },
+{ # see 'S Programming' p. 67 for this matching
+  fun <- try(match.fun(FUN))
+}
+  ) # end switch block
+  
+  # calculate FUN on R
+  out <- rep(0, ncol(R))
+  .formals <- formals(fun)
+  onames <- names(.formals)
+  for(i in 1:ncol(R)){
+    nargs$R <- R[, i]
+    dargs <- nargs
+    pm <- pmatch(names(dargs), onames, nomatch = 0L)
+    names(dargs[pm > 0L]) <- onames[pm]
+    .formals[pm] <- dargs[pm > 0L]
+    out[i] <- try(do.call(fun, .formals))
+  }
+  return(out)
+}

Modified: pkg/PortfolioAnalytics/R/chart.RiskReward.R
===================================================================
--- pkg/PortfolioAnalytics/R/chart.RiskReward.R	2013-08-20 19:32:30 UTC (rev 2840)
+++ pkg/PortfolioAnalytics/R/chart.RiskReward.R	2013-08-20 21:08:55 UTC (rev 2841)
@@ -8,6 +8,7 @@
 #' @param rp TRUE/FALSE. If TRUE, random portfolios are generated by \code{\link{random_portfolios}} to view the feasible space
 #' @param return.col string matching the objective of a 'return' objective, on vertical axis
 #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis
+#' @param chart.assets TRUE/FALSE.
Includes a risk reward scatter of the assets in the chart #' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} #' @param element.color color for the default plot scatter points #' @seealso \code{\link{optimize.portfolio}} Modified: pkg/PortfolioAnalytics/R/charts.DE.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-20 19:32:30 UTC (rev 2840) +++ pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-20 21:08:55 UTC (rev 2841) @@ -77,7 +77,7 @@ #' @rdname chart.RiskReward #' @export -chart.Scatter.DE <- function(object, neighbors = NULL, ..., return.col='mean', risk.col='ES', element.color = "darkgray", cex.axis=0.8){ +chart.Scatter.DE <- function(object, neighbors = NULL, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8){ # more or less specific to the output of the DEoptim portfolio code with constraints # will work to a point with other functions, such as optimize.porfolio.parallel # there's still a lot to do to improve this. @@ -138,8 +138,25 @@ } # print(colnames(head(xtract))) + if(chart.assets){ + # Include risk reward scatter of asset returns + asset_ret <- scatterFUN(R=R, FUN=return.col, ...=...) + asset_risk <- scatterFUN(R=R, FUN=risk.col, ...=...) + rnames <- colnames(R) + } else { + asset_ret <- NULL + asset_risk <- NULL + } + + # plot the portfolios from DEoptim_objective_results plot(xtract[,risk.column],xtract[,return.column], xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) + # plot the risk-reward scatter of the assets + if(chart.assets){ + points(x=asset_risk, y=asset_ret) + text(x=asset_risk, y=asset_ret, labels=colnames(R), pos=4, cex=0.8) + } + if(!is.null(neighbors)){ if(is.vector(neighbors)){ if(length(neighbors)==1){ @@ -188,7 +205,7 @@ w = w.traj[i,] x = unlist(constrained_objective(w=w, R=R, portfolio=portfolio, trace=TRUE)) - names(x)<-name.replace(names(x)) + names(x)<-PortfolioAnalytics:::name.replace(names(x)) if(is.null(trajnames)) trajnames<-names(x) if(is.null(rsc)){ rtc = pmatch(return.col,trajnames) @@ -224,7 +241,7 @@ result.slot<-'objective_measures' } objcols<-unlist(object[[result.slot]]) - names(objcols)<-name.replace(names(objcols)) + names(objcols)<-PortfolioAnalytics:::name.replace(names(objcols)) return.column = pmatch(return.col,names(objcols)) if(is.na(return.column)) { return.col = paste(return.col,return.col,sep='.') @@ -245,8 +262,10 @@ ret <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=return.col)) risk <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=risk.col)) points(risk, ret, col="blue", pch=16) #optimal + text(x=risk, y=ret, labels="Optimal",col="blue", pos=4, cex=0.8) } else { points(objcols[risk.column], objcols[return.column], col="blue", pch=16) # optimal + text(x=objcols[risk.column], y=objcols[return.column], labels="Optimal",col="blue", pos=4, cex=0.8) } axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) @@ -278,13 +297,13 @@ #' \code{\link{optimize.portfolio}} #' \code{\link{extractStats}} #' @export -charts.DE <- function(DE, risk.col, return.col, neighbors=NULL, main="DEoptim.Portfolios", ...){ +charts.DE <- function(DE, risk.col, return.col, chart.assets, neighbors=NULL, main="DEoptim.Portfolios", ...){ # Specific to the output of the random portfolio code with constraints # @TODO: check that DE is of the correct class op <- par(no.readonly=TRUE) 
layout(matrix(c(1,2)),height=c(2,1.5),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.DE(object=DE, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) + chart.Scatter.DE(object=DE, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, ...) par(mar=c(2,4,0,2)) chart.Weights.DE(object=DE, main="", neighbors=neighbors, ...) par(op) @@ -311,6 +330,6 @@ #' @param neighbors set of 'neighbor portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} #' @export -plot.optimize.portfolio.DEoptim <- function(x, ..., return.col='mean', risk.col='ES', neighbors=NULL, main='optimized portfolio plot') { - charts.DE(DE=x, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) +plot.optimize.portfolio.DEoptim <- function(x, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, neighbors=NULL, main='optimized portfolio plot') { + charts.DE(DE=x, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, ...) } Modified: pkg/PortfolioAnalytics/R/charts.GenSA.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-20 19:32:30 UTC (rev 2840) +++ pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-20 21:08:55 UTC (rev 2841) @@ -65,7 +65,7 @@ #' @rdname chart.RiskReward #' @export -chart.Scatter.GenSA <- function(object, neighbors=NULL, ..., rp=FALSE, return.col="mean", risk.col="ES", element.color="darkgray", cex.axis=0.8){ +chart.Scatter.GenSA <- function(object, neighbors=NULL, ..., rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, element.color="darkgray", cex.axis=0.8){ if(!inherits(object, "optimize.portfolio.GenSA")) stop("object must be of class 'optimize.portfolio.GenSA'") @@ -75,6 +75,8 @@ permutations <- match.call(expand.dots=TRUE)$permutations if(is.null(permutations)) permutations <- 2000 rp <- random_portfolios(portfolio=object$portfolio, permutations=permutations) + } else { + rp = NULL } # Get the optimal weights from the output of optimize.portfolio @@ -86,8 +88,31 @@ returnpoints <- applyFUN(R=R, weights=rp, FUN=return.col, ...=...) riskpoints <- applyFUN(R=R, weights=rp, FUN=risk.col, ...=...) - plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) + if(chart.assets){ + # Include risk reward scatter of asset returns + asset_ret <- scatterFUN(R=R, FUN=return.col, ...=...) + asset_risk <- scatterFUN(R=R, FUN=risk.col, ...=...) + rnames <- colnames(R) + } else { + asset_ret <- NULL + asset_risk <- NULL + } + + # get limits for x and y axis + ylim <- range(returnpoints, asset_ret) + xlim <- range(riskpoints, asset_risk) + + # Plot the portfolios + plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", ylim=ylim, xlim=xlim, axes=FALSE, ...) 
points(x=riskpoints[1], y=returnpoints[1], col="blue", pch=16) # optimal + text(x=riskpoints[1], y=returnpoints[1], labels="Optimal",col="blue", pos=4, cex=0.8) + + # plot the risk-reward scatter of the assets + if(chart.assets){ + points(x=asset_risk, y=asset_ret) + text(x=asset_risk, y=asset_ret, labels=colnames(R), pos=4, cex=0.8) + } + axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) @@ -114,13 +139,12 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -charts.GenSA <- function(GenSA, rp=FALSE, return.col="mean", risk.col="StdDev", - cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ +charts.GenSA <- function(GenSA, rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ # Specific to the output of the optimize_method=GenSA op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,2),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.GenSA(object=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) + chart.Scatter.GenSA(object=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) par(mar=c(2,4,0,2)) chart.Weights.GenSA(object=GenSA, neighbors=neighbors, las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis, ...=..., main="") par(op) @@ -143,6 +167,6 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.GenSA <- function(GenSA, rp=FALSE, return.col="mean", risk.col="StdDev", cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ - charts.GenSA(GenSA=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) +plot.optimize.portfolio.GenSA <- function(GenSA, rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ + charts.GenSA(GenSA=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) } Modified: pkg/PortfolioAnalytics/R/charts.PSO.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-20 19:32:30 UTC (rev 2840) +++ pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-20 21:08:55 UTC (rev 2841) @@ -65,7 +65,7 @@ #' @rdname chart.RiskReward #' @export -chart.Scatter.pso <- function(object, neighbors=NULL, ..., return.col="mean", risk.col="ES", element.color = "darkgray", cex.axis=0.8){ +chart.Scatter.pso <- function(object, neighbors=NULL, ..., return.col="mean", risk.col="ES", chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8){ if(!inherits(object, "optimize.portfolio.pso")) stop("object must be of class 'optimize.portfolio.pso'") R <- object$R # Object with the "out" value in the first column and the normalized weights @@ -78,8 +78,31 @@ returnpoints <- applyFUN(R=R, weights=wts, FUN=return.col, ...=...) riskpoints <- applyFUN(R=R, weights=wts, FUN=risk.col, ...=...) - plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) 
+ if(chart.assets){ + # Include risk reward scatter of asset returns + asset_ret <- scatterFUN(R=R, FUN=return.col, ...=...) + asset_risk <- scatterFUN(R=R, FUN=risk.col, ...=...) + rnames <- colnames(R) + } else { + asset_ret <- NULL + asset_risk <- NULL + } + + # get limits for x and y axis + ylim <- range(returnpoints, asset_ret) + xlim <- range(riskpoints, asset_risk) + + # plot the portfolios + plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, xlim=xlim, ylim=ylim, col="darkgray", axes=FALSE, ...) points(x=riskpoints[1], y=returnpoints[1], col="blue", pch=16) # optimal + text(x=riskpoints[1], y=returnpoints[1], labels="Optimal",col="blue", pos=4, cex=0.8) + + # plot the risk-reward scatter of the assets + if(chart.assets){ + points(x=asset_risk, y=asset_ret) + text(x=asset_risk, y=asset_ret, labels=colnames(R), pos=4, cex=0.8) + } + axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) @@ -105,13 +128,12 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -charts.pso <- function(pso, return.col="mean", risk.col="StdDev", - cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", ...){ +charts.pso <- function(pso, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", ...){ # Specific to the output of the optimize_method=pso op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,2),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.pso(object=pso, return.col=return.col, risk.col=risk.col, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) + chart.Scatter.pso(object=pso, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) par(mar=c(2,4,0,2)) chart.Weights.pso(object=pso, neighbors=neighbors, las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis, ...=..., main="") par(op) @@ -133,7 +155,6 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.pso <- function(pso, return.col="mean", risk.col="StdDev", - cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", ...){ - charts.pso(pso=pso, return.col=return.col, risk.col=risk.col, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) +plot.optimize.portfolio.pso <- function(pso, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", ...){ + charts.pso(pso=pso, return.col=return.col, risk.col=risk.col, chart.assets=FALSE, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) 
} Modified: pkg/PortfolioAnalytics/R/charts.ROI.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-20 19:32:30 UTC (rev 2840) +++ pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-20 21:08:55 UTC (rev 2841) @@ -65,7 +65,7 @@ #' @rdname chart.RiskReward #' @export -chart.Scatter.ROI <- function(object, neighbors=NULL, ..., rp=FALSE, return.col="mean", risk.col="ES", element.color = "darkgray", cex.axis=0.8){ +chart.Scatter.ROI <- function(object, neighbors=NULL, ..., rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8){ if(!inherits(object, "optimize.portfolio.ROI")) stop("object must be of class 'optimize.portfolio.ROI'") @@ -75,6 +75,8 @@ permutations <- match.call(expand.dots=TRUE)$permutations if(is.null(permutations)) permutations <- 2000 rp <- random_portfolios(portfolio=object$portfolio, permutations=permutations) + } else { + rp = NULL } # Get the optimal weights from the output of optimize.portfolio @@ -86,8 +88,32 @@ returnpoints <- applyFUN(R=R, weights=rp, FUN=return.col, ...=...) riskpoints <- applyFUN(R=R, weights=rp, FUN=risk.col, ...=...) - plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) + if(chart.assets){ + # Include risk reward scatter of asset returns + asset_ret <- scatterFUN(R=R, FUN=return.col, ...=...) + asset_risk <- scatterFUN(R=R, FUN=risk.col, ...=...) + rnames <- colnames(R) + } else { + asset_ret <- NULL + asset_risk <- NULL + } + + # get limits for x and y axis + ylim <- range(returnpoints, asset_ret) + xlim <- range(riskpoints, asset_risk) + + # Plot the portfolios + plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", ylim=ylim, xlim=xlim, axes=FALSE, ...) 
+ # Plot the optimal portfolio points(x=riskpoints[1], y=returnpoints[1], col="blue", pch=16) # optimal + text(x=riskpoints[1], y=returnpoints[1], labels="Optimal",col="blue", pos=4, cex=0.8) + + # plot the risk-reward scatter of the assets + if(chart.assets){ + points(x=asset_risk, y=asset_ret) + text(x=asset_risk, y=asset_ret, labels=colnames(R), pos=4, cex=0.8) + } + axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) @@ -117,14 +143,13 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -charts.ROI <- function(ROI, rp=FALSE, risk.col="StdDev", return.col="mean", - cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", ...){ +charts.ROI <- function(ROI, rp=FALSE, risk.col="StdDev", return.col="mean", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", ...){ # Specific to the output of the optimize_method=ROI R <- ROI$R op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,1.5),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.ROI(ROI, rp=rp, return.col=return.col, risk.col=risk.col, ..., element.color=element.color, cex.axis=cex.axis, main=main) + chart.Scatter.ROI(ROI, rp=rp, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, ..., element.color=element.color, cex.axis=cex.axis, main=main) par(mar=c(2,4,0,2)) chart.Weights.ROI(ROI, neighbors=neighbors, ..., main="", las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis) par(op) @@ -150,6 +175,6 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.ROI <- function(ROI, rp=FALSE, risk.col="StdDev", return.col="mean", element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", ...){ - charts.ROI(ROI=ROI, rp=rp, risk.col=risk.col, return.col=return.col, main=main, ...) +plot.optimize.portfolio.ROI <- function(ROI, rp=FALSE, risk.col="ES", return.col="mean", chart.assets=chart.assets, element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", ...){ + charts.ROI(ROI=ROI, rp=rp, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, main=main, ...) } Modified: pkg/PortfolioAnalytics/R/charts.RP.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-20 19:32:30 UTC (rev 2840) +++ pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-20 21:08:55 UTC (rev 2841) @@ -81,7 +81,7 @@ #' @rdname chart.RiskReward #' @export -chart.Scatter.RP <- function(object, neighbors = NULL, ..., return.col='mean', risk.col='ES', element.color = "darkgray", cex.axis=0.8){ +chart.Scatter.RP <- function(object, neighbors = NULL, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8){ # more or less specific to the output of the random portfolio code with constraints # will work to a point with other functions, such as optimize.porfolio.parallel # there's still a lot to do to improve this. @@ -141,13 +141,30 @@ } # print(colnames(head(xtract))) + if(chart.assets){ + # Include risk reward scatter of asset returns + asset_ret <- scatterFUN(R=R, FUN=return.col, ...=...) + asset_risk <- scatterFUN(R=R, FUN=risk.col, ...=...) + rnames <- colnames(R) + } else { + asset_ret <- NULL + asset_risk <- NULL + } + + # plot the random portfolios plot(xtract[,risk.column],xtract[,return.column], xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) 
+ # plot the risk-reward scatter of the assets + if(chart.assets){ + points(x=asset_risk, y=asset_ret) + text(x=asset_risk, y=asset_ret, labels=colnames(R), pos=4, cex=0.8) + } + if(!is.null(neighbors)){ if(is.vector(neighbors)){ if(length(neighbors)==1){ # overplot nearby portfolios defined by 'out' - orderx = order(xtract[,"out"]) #TODO this won't work if the objective is anything othchart.Scatter.er than mean + orderx = order(xtract[,"out"]) #TODO this won't work if the objective is anything other than mean subsetx = head(xtract[orderx,], n=neighbors) } else{ # assume we have a vector of portfolio numbers @@ -205,8 +222,10 @@ ret <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=return.col)) risk <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=risk.col)) points(risk, ret, col="blue", pch=16) #optimal + text(x=risk, y=ret, labels="Optimal",col="blue", pos=4, cex=0.8) } else { points(objcols[risk.column], objcols[return.column], col="blue", pch=16) # optimal + text(x=objcols[risk.column], y=objcols[return.column], labels="Optimal",col="blue", pos=4, cex=0.8) } axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) @@ -238,13 +257,13 @@ #' \code{\link{optimize.portfolio}} #' \code{\link{extractStats}} #' @export -charts.RP <- function(RP, risk.col, return.col, neighbors=NULL, main="Random.Portfolios", ...){ +charts.RP <- function(RP, risk.col, return.col, chart.assets=FALSE, neighbors=NULL, main="Random.Portfolios", ...){ # Specific to the output of the random portfolio code with constraints # @TODO: check that RP is of the correct class op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,1.5),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.RP(object=RP, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) + chart.Scatter.RP(object=RP, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, ...) par(mar=c(2,4,0,2)) chart.Weights.RP(object=RP, main="", neighbors=neighbors, ...) par(op) @@ -271,8 +290,8 @@ #' @param neighbors set of 'neighbor portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} #' @export -plot.optimize.portfolio.random <- function(x, ..., R=NULL, return.col='mean', risk.col='ES', neighbors=NULL, main='optimized portfolio plot') { - charts.RP(RP=x, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) +plot.optimize.portfolio.random <- function(x, ..., R=NULL, return.col='mean', risk.col='ES', chart.assets=FALSE, neighbors=NULL, main='optimized portfolio plot') { + charts.RP(RP=x, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, ...) } #' plot method for optimize.portfolio output @@ -296,6 +315,6 @@ #' @param neighbors set of 'neighbor portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} #' @export -plot.optimize.portfolio <- function(x, ..., return.col='mean', risk.col='ES', neighbors=NULL, main='optimized portfolio plot') { - charts.RP(RP=x, risk.col=risk.col, return.col=return.col, neighbors=neighbors, main=main, ...) +plot.optimize.portfolio <- function(x, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, neighbors=NULL, main='optimized portfolio plot') { + charts.RP(RP=x, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, ...) 
} \ No newline at end of file Modified: pkg/PortfolioAnalytics/man/chart.RiskReward.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.RiskReward.Rd 2013-08-20 19:32:30 UTC (rev 2840) +++ pkg/PortfolioAnalytics/man/chart.RiskReward.Rd 2013-08-20 21:08:55 UTC (rev 2841) @@ -14,48 +14,53 @@ \usage{ chart.Scatter.DE(object, neighbors = NULL, ..., return.col = "mean", risk.col = "ES", - element.color = "darkgray", cex.axis = 0.8) + chart.assets = FALSE, element.color = "darkgray", + cex.axis = 0.8) chart.RiskReward.optimize.portfolio.DEoptim(object, neighbors = NULL, ..., return.col = "mean", - risk.col = "ES", element.color = "darkgray", - cex.axis = 0.8) + risk.col = "ES", chart.assets = FALSE, + element.color = "darkgray", cex.axis = 0.8) chart.Scatter.RP(object, neighbors = NULL, ..., return.col = "mean", risk.col = "ES", - element.color = "darkgray", cex.axis = 0.8) + chart.assets = FALSE, element.color = "darkgray", + cex.axis = 0.8) chart.RiskReward.optimize.portfolio.random(object, neighbors = NULL, ..., return.col = "mean", - risk.col = "ES", element.color = "darkgray", - cex.axis = 0.8) + risk.col = "ES", chart.assets = FALSE, + element.color = "darkgray", cex.axis = 0.8) chart.Scatter.ROI(object, neighbors = NULL, ..., rp = FALSE, return.col = "mean", risk.col = "ES", - element.color = "darkgray", cex.axis = 0.8) + chart.assets = FALSE, element.color = "darkgray", + cex.axis = 0.8) chart.RiskReward.optimize.portfolio.ROI(object, neighbors = NULL, ..., rp = FALSE, return.col = "mean", - risk.col = "ES", element.color = "darkgray", - cex.axis = 0.8) + risk.col = "ES", chart.assets = FALSE, + element.color = "darkgray", cex.axis = 0.8) chart.Scatter.pso(object, neighbors = NULL, ..., return.col = "mean", risk.col = "ES", - element.color = "darkgray", cex.axis = 0.8) + chart.assets = FALSE, element.color = "darkgray", + cex.axis = 0.8) chart.RiskReward.optimize.portfolio.pso(object, neighbors = NULL, ..., return.col = "mean", - risk.col = "ES", element.color = "darkgray", - cex.axis = 0.8) + risk.col = "ES", chart.assets = FALSE, + element.color = "darkgray", cex.axis = 0.8) chart.Scatter.GenSA(object, neighbors = NULL, ..., rp = FALSE, return.col = "mean", risk.col = "ES", - element.color = "darkgray", cex.axis = 0.8) + chart.assets = FALSE, element.color = "darkgray", + cex.axis = 0.8) chart.RiskReward.optimize.portfolio.GenSA(object, neighbors = NULL, ..., rp = FALSE, return.col = "mean", - risk.col = "ES", element.color = "darkgray", - cex.axis = 0.8) + risk.col = "ES", chart.assets = FALSE, + element.color = "darkgray", cex.axis = 0.8) chart.RiskReward(object, neighbors, ..., rp = FALSE, return.col = "mean", risk.col = "ES", @@ -80,6 +85,9 @@ \item{risk.col}{string matching the objective of a 'risk' objective, on horizontal axis} + \item{chart.assets}{TRUE/FALSE. Includes a risk reward + scatter of the assets in the chart} + \item{cex.axis}{The magnification to be used for axis annotation relative to the current setting of \code{cex}} Modified: pkg/PortfolioAnalytics/man/charts.DE.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.DE.Rd 2013-08-20 19:32:30 UTC (rev 2840) +++ pkg/PortfolioAnalytics/man/charts.DE.Rd 2013-08-20 21:08:55 UTC (rev 2841) @@ -2,8 +2,8 @@ \alias{charts.DE} \title{scatter and weights chart for random portfolios} \usage{ - charts.DE(DE, risk.col, return.col, neighbors = NULL, - main = "DEoptim.Portfolios", ...) 
+ charts.DE(DE, risk.col, return.col, chart.assets, + neighbors = NULL, main = "DEoptim.Portfolios", ...) } \arguments{ \item{DE}{set of random portfolios created by Modified: pkg/PortfolioAnalytics/man/charts.GenSA.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.GenSA.Rd 2013-08-20 19:32:30 UTC (rev 2840) +++ pkg/PortfolioAnalytics/man/charts.GenSA.Rd 2013-08-20 21:08:55 UTC (rev 2841) @@ -2,8 +2,8 @@ \alias{charts.GenSA} \title{scatter and weights chart for portfolios} \usage{ - charts.GenSA(GenSA, rp = NULL, return.col = "mean", - risk.col = "StdDev", cex.axis = 0.8, + charts.GenSA(GenSA, rp = FALSE, return.col = "mean", + risk.col = "ES", chart.assets = FALSE, cex.axis = 0.8, element.color = "darkgray", neighbors = NULL, main = "GenSA.Portfolios", ...) } Modified: pkg/PortfolioAnalytics/man/charts.ROI.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.ROI.Rd 2013-08-20 19:32:30 UTC (rev 2840) +++ pkg/PortfolioAnalytics/man/charts.ROI.Rd 2013-08-20 21:08:55 UTC (rev 2841) @@ -2,10 +2,10 @@ \alias{charts.ROI} \title{scatter and weights chart for portfolios} \usage{ - charts.ROI(ROI, rp = NULL, risk.col = "StdDev", - return.col = "mean", cex.axis = 0.8, - element.color = "darkgray", neighbors = NULL, - main = "ROI.Portfolios", ...) + charts.ROI(ROI, rp = FALSE, risk.col = "StdDev", + return.col = "mean", chart.assets = FALSE, + cex.axis = 0.8, element.color = "darkgray", + neighbors = NULL, main = "ROI.Portfolios", ...) } \arguments{ \item{ROI}{object created by Modified: pkg/PortfolioAnalytics/man/charts.RP.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.RP.Rd 2013-08-20 19:32:30 UTC (rev 2840) +++ pkg/PortfolioAnalytics/man/charts.RP.Rd 2013-08-20 21:08:55 UTC (rev 2841) @@ -2,7 +2,7 @@ \alias{charts.RP} \title{scatter and weights chart for random portfolios} \usage{ - charts.RP(RP, R = NULL, risk.col, return.col, + charts.RP(RP, risk.col, return.col, chart.assets = FALSE, neighbors = NULL, main = "Random.Portfolios", ...) } \arguments{ Modified: pkg/PortfolioAnalytics/man/charts.pso.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.pso.Rd 2013-08-20 19:32:30 UTC (rev 2840) +++ pkg/PortfolioAnalytics/man/charts.pso.Rd 2013-08-20 21:08:55 UTC (rev 2841) @@ -2,9 +2,10 @@ \alias{charts.pso} \title{scatter and weights chart for portfolios} \usage{ - charts.pso(pso, return.col = "mean", risk.col = "StdDev", - cex.axis = 0.8, element.color = "darkgray", - neighbors = NULL, main = "PSO.Portfolios", ...) + charts.pso(pso, return.col = "mean", risk.col = "ES", + chart.assets = FALSE, cex.axis = 0.8, + element.color = "darkgray", neighbors = NULL, + main = "PSO.Portfolios", ...) 
} \arguments{ \item{pso}{object created by Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd 2013-08-20 19:32:30 UTC (rev 2840) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd 2013-08-20 21:08:55 UTC (rev 2841) @@ -3,7 +3,8 @@ \title{plot method for optimize.portfolio.DEoptim output} \usage{ plot.optimize.portfolio.DEoptim(x, ..., - return.col = "mean", risk.col = "ES", neighbors = NULL, + return.col = "mean", risk.col = "ES", + chart.assets = FALSE, neighbors = NULL, main = "optimized portfolio plot") } \arguments{ Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd 2013-08-20 19:32:30 UTC (rev 2840) [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2841 From noreply at r-forge.r-project.org Wed Aug 21 02:46:37 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 21 Aug 2013 02:46:37 +0200 (CEST) Subject: [Returnanalytics-commits] r2842 - in pkg/PortfolioAnalytics: R man Message-ID: <20130821004637.6778C18106D@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-21 02:46:37 +0200 (Wed, 21 Aug 2013) New Revision: 2842 Modified: pkg/PortfolioAnalytics/R/chart.RiskReward.R pkg/PortfolioAnalytics/R/charts.DE.R pkg/PortfolioAnalytics/R/charts.GenSA.R pkg/PortfolioAnalytics/R/charts.PSO.R pkg/PortfolioAnalytics/R/charts.ROI.R pkg/PortfolioAnalytics/R/charts.RP.R pkg/PortfolioAnalytics/man/chart.RiskReward.Rd pkg/PortfolioAnalytics/man/charts.DE.Rd pkg/PortfolioAnalytics/man/charts.GenSA.Rd pkg/PortfolioAnalytics/man/charts.ROI.Rd pkg/PortfolioAnalytics/man/charts.RP.Rd pkg/PortfolioAnalytics/man/charts.pso.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd Log: adding xlim and ylim arguments to chart.Scatter.* for better control of plotting Modified: pkg/PortfolioAnalytics/R/chart.RiskReward.R =================================================================== --- pkg/PortfolioAnalytics/R/chart.RiskReward.R 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/R/chart.RiskReward.R 2013-08-21 00:46:37 UTC (rev 2842) @@ -11,9 +11,11 @@ #' @param chart.assets TRUE/FALSE. 
Includes a risk reward scatter of the assets in the chart #' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} #' @param element.color color for the default plot scatter points +#' @param xlim set the x-axis limit, same as in \code{\link{plot}} +#' @param ylim set the y-axis limit, same as in \code{\link{plot}} #' @seealso \code{\link{optimize.portfolio}} #' @export -chart.RiskReward <- function(object, neighbors, ..., rp=FALSE, return.col="mean", risk.col="ES", element.color = "darkgray", cex.axis=0.8){ +chart.RiskReward <- function(object, neighbors, ..., rp=FALSE, return.col="mean", risk.col="ES", element.color = "darkgray", cex.axis=0.8, ylim=NULL, xlim=NULL){ UseMethod("chart.RiskReward") } Modified: pkg/PortfolioAnalytics/R/charts.DE.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-21 00:46:37 UTC (rev 2842) @@ -77,7 +77,7 @@ #' @rdname chart.RiskReward #' @export -chart.Scatter.DE <- function(object, neighbors = NULL, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8){ +chart.Scatter.DE <- function(object, neighbors = NULL, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8, xlim=NULL, ylim=NULL){ # more or less specific to the output of the DEoptim portfolio code with constraints # will work to a point with other functions, such as optimize.porfolio.parallel # there's still a lot to do to improve this. @@ -143,13 +143,15 @@ asset_ret <- scatterFUN(R=R, FUN=return.col, ...=...) asset_risk <- scatterFUN(R=R, FUN=risk.col, ...=...) rnames <- colnames(R) + xlim <- range(c(xtract[,risk.column], asset_risk)) + ylim <- range(c(xtract[,return.column], asset_ret)) } else { asset_ret <- NULL asset_risk <- NULL } # plot the portfolios from DEoptim_objective_results - plot(xtract[,risk.column],xtract[,return.column], xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) + plot(xtract[,risk.column],xtract[,return.column], xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, xlim=xlim, ylim=ylim, ...) # plot the risk-reward scatter of the assets if(chart.assets){ @@ -297,13 +299,13 @@ #' \code{\link{optimize.portfolio}} #' \code{\link{extractStats}} #' @export -charts.DE <- function(DE, risk.col, return.col, chart.assets, neighbors=NULL, main="DEoptim.Portfolios", ...){ +charts.DE <- function(DE, risk.col, return.col, chart.assets, neighbors=NULL, main="DEoptim.Portfolios", xlim=NULL, ylim=NULL, ...){ # Specific to the output of the random portfolio code with constraints # @TODO: check that DE is of the correct class op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,1.5),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.DE(object=DE, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, ...) + chart.Scatter.DE(object=DE, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, xlim=xlim, ylim=ylim, ...) par(mar=c(2,4,0,2)) chart.Weights.DE(object=DE, main="", neighbors=neighbors, ...) 
par(op) @@ -330,6 +332,6 @@ #' @param neighbors set of 'neighbor portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} #' @export -plot.optimize.portfolio.DEoptim <- function(x, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, neighbors=NULL, main='optimized portfolio plot') { - charts.DE(DE=x, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, ...) +plot.optimize.portfolio.DEoptim <- function(x, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, neighbors=NULL, xlim=NULL, ylim=NULL, main='optimized portfolio plot') { + charts.DE(DE=x, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, xlim=xlim, ylim=ylim, ...) } Modified: pkg/PortfolioAnalytics/R/charts.GenSA.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-21 00:46:37 UTC (rev 2842) @@ -65,7 +65,7 @@ #' @rdname chart.RiskReward #' @export -chart.Scatter.GenSA <- function(object, neighbors=NULL, ..., rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, element.color="darkgray", cex.axis=0.8){ +chart.Scatter.GenSA <- function(object, neighbors=NULL, ..., rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, element.color="darkgray", cex.axis=0.8, ylim=NULL, xlim=NULL){ if(!inherits(object, "optimize.portfolio.GenSA")) stop("object must be of class 'optimize.portfolio.GenSA'") @@ -99,8 +99,12 @@ } # get limits for x and y axis - ylim <- range(returnpoints, asset_ret) - xlim <- range(riskpoints, asset_risk) + if(is.null(ylim)){ + ylim <- range(returnpoints, asset_ret) + } + if(is.null(xlim)){ + xlim <- range(riskpoints, asset_risk) + } # Plot the portfolios plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", ylim=ylim, xlim=xlim, axes=FALSE, ...) @@ -139,12 +143,12 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -charts.GenSA <- function(GenSA, rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ +charts.GenSA <- function(GenSA, rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", xlim=NULL, ylim=NULL, ...){ # Specific to the output of the optimize_method=GenSA op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,2),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.GenSA(object=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) + chart.Scatter.GenSA(object=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, element.color=element.color, cex.axis=cex.axis, main=main, xlim=xlim, ylim=ylim, ...=...) 
par(mar=c(2,4,0,2)) chart.Weights.GenSA(object=GenSA, neighbors=neighbors, las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis, ...=..., main="") par(op) @@ -167,6 +171,6 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.GenSA <- function(GenSA, rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", ...){ - charts.GenSA(GenSA=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) +plot.optimize.portfolio.GenSA <- function(GenSA, rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", xlim=NULL, ylim=NULL, ...){ + charts.GenSA(GenSA=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, xlim=xlim, ylim=ylim, ...=...) } Modified: pkg/PortfolioAnalytics/R/charts.PSO.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-21 00:46:37 UTC (rev 2842) @@ -65,7 +65,7 @@ #' @rdname chart.RiskReward #' @export -chart.Scatter.pso <- function(object, neighbors=NULL, ..., return.col="mean", risk.col="ES", chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8){ +chart.Scatter.pso <- function(object, neighbors=NULL, ..., return.col="mean", risk.col="ES", chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8, xlim=NULL, ylim=NULL){ if(!inherits(object, "optimize.portfolio.pso")) stop("object must be of class 'optimize.portfolio.pso'") R <- object$R # Object with the "out" value in the first column and the normalized weights @@ -89,8 +89,12 @@ } # get limits for x and y axis - ylim <- range(returnpoints, asset_ret) - xlim <- range(riskpoints, asset_risk) + if(is.null(ylim)){ + ylim <- range(returnpoints, asset_ret) + } + if(is.null(xlim)){ + xlim <- range(riskpoints, asset_risk) + } # plot the portfolios plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, xlim=xlim, ylim=ylim, col="darkgray", axes=FALSE, ...) @@ -128,12 +132,12 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -charts.pso <- function(pso, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", ...){ +charts.pso <- function(pso, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", xlim=NULL, ylim=NULL, ...){ # Specific to the output of the optimize_method=pso op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,2),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.pso(object=pso, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, element.color=element.color, cex.axis=cex.axis, main=main, ...=...) + chart.Scatter.pso(object=pso, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, element.color=element.color, cex.axis=cex.axis, main=main, xlim=xlim, ylim=ylim, ...=...) 
par(mar=c(2,4,0,2)) chart.Weights.pso(object=pso, neighbors=neighbors, las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis, ...=..., main="") par(op) @@ -155,6 +159,6 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.pso <- function(pso, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", ...){ - charts.pso(pso=pso, return.col=return.col, risk.col=risk.col, chart.assets=FALSE, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, ...=...) +plot.optimize.portfolio.pso <- function(pso, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", xlim=NULL, ylim=NULL, ...){ + charts.pso(pso=pso, return.col=return.col, risk.col=risk.col, chart.assets=FALSE, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, xlim=xlim, ylim=ylim, ...=...) } Modified: pkg/PortfolioAnalytics/R/charts.ROI.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-21 00:46:37 UTC (rev 2842) @@ -65,7 +65,7 @@ #' @rdname chart.RiskReward #' @export -chart.Scatter.ROI <- function(object, neighbors=NULL, ..., rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8){ +chart.Scatter.ROI <- function(object, neighbors=NULL, ..., rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8, xlim=NULL, ylim=NULL){ if(!inherits(object, "optimize.portfolio.ROI")) stop("object must be of class 'optimize.portfolio.ROI'") @@ -99,11 +99,15 @@ } # get limits for x and y axis - ylim <- range(returnpoints, asset_ret) - xlim <- range(riskpoints, asset_risk) + if(is.null(ylim)){ + ylim <- range(c(returnpoints, asset_ret)) + } + if(is.null(xlim)){ + xlim <- range(c(riskpoints, asset_risk)) + } # Plot the portfolios - plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", ylim=ylim, xlim=xlim, axes=FALSE, ...) + plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, col="darkgray", xlim=xlim, ylim=ylim, axes=FALSE, ...) 
# Plot the optimal portfolio points(x=riskpoints[1], y=returnpoints[1], col="blue", pch=16) # optimal text(x=riskpoints[1], y=returnpoints[1], labels="Optimal",col="blue", pos=4, cex=0.8) @@ -143,15 +147,14 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -charts.ROI <- function(ROI, rp=FALSE, risk.col="StdDev", return.col="mean", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", ...){ +charts.ROI <- function(ROI, rp=FALSE, risk.col="ES", return.col="mean", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", xlim=NULL, ylim=NULL, ...){ # Specific to the output of the optimize_method=ROI - R <- ROI$R op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,1.5),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.ROI(ROI, rp=rp, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, ..., element.color=element.color, cex.axis=cex.axis, main=main) + chart.Scatter.ROI(object=ROI, rp=rp, return.col=return.col, risk.col=risk.col, ..., chart.assets=chart.assets, element.color=element.color, cex.axis=cex.axis, main=main, xlim=xlim, ylim=ylim) par(mar=c(2,4,0,2)) - chart.Weights.ROI(ROI, neighbors=neighbors, ..., main="", las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis) + chart.Weights.ROI(object=ROI, neighbors=neighbors, ..., main="", las=3, xlab=NULL, cex.lab=1, element.color=element.color, cex.axis=cex.axis) par(op) } @@ -175,6 +178,6 @@ #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.ROI <- function(ROI, rp=FALSE, risk.col="ES", return.col="mean", chart.assets=chart.assets, element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", ...){ - charts.ROI(ROI=ROI, rp=rp, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, main=main, ...) +plot.optimize.portfolio.ROI <- function(ROI, rp=FALSE, risk.col="ES", return.col="mean", chart.assets=chart.assets, element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", xlim=NULL, ylim=NULL, ...){ + charts.ROI(ROI=ROI, rp=rp, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, main=main, xlim=xlim, ylim=ylim, ...) } Modified: pkg/PortfolioAnalytics/R/charts.RP.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-21 00:46:37 UTC (rev 2842) @@ -81,7 +81,7 @@ #' @rdname chart.RiskReward #' @export -chart.Scatter.RP <- function(object, neighbors = NULL, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8){ +chart.Scatter.RP <- function(object, neighbors = NULL, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8, xlim=NULL, ylim=NULL){ # more or less specific to the output of the random portfolio code with constraints # will work to a point with other functions, such as optimize.porfolio.parallel # there's still a lot to do to improve this. @@ -146,13 +146,15 @@ asset_ret <- scatterFUN(R=R, FUN=return.col, ...=...) asset_risk <- scatterFUN(R=R, FUN=risk.col, ...=...) 
rnames <- colnames(R) + xlim <- range(c(xtract[,risk.column], asset_risk)) + ylim <- range(c(xtract[,return.column], asset_ret)) } else { asset_ret <- NULL asset_risk <- NULL } # plot the random portfolios - plot(xtract[,risk.column],xtract[,return.column], xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, ...) + plot(xtract[,risk.column],xtract[,return.column], xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, xlim=xlim, ylim=ylim, ...) # plot the risk-reward scatter of the assets if(chart.assets){ @@ -257,13 +259,13 @@ #' \code{\link{optimize.portfolio}} #' \code{\link{extractStats}} #' @export -charts.RP <- function(RP, risk.col, return.col, chart.assets=FALSE, neighbors=NULL, main="Random.Portfolios", ...){ +charts.RP <- function(RP, risk.col, return.col, chart.assets=FALSE, neighbors=NULL, main="Random.Portfolios", xlim=NULL, ylim=NULL, ...){ # Specific to the output of the random portfolio code with constraints # @TODO: check that RP is of the correct class op <- par(no.readonly=TRUE) layout(matrix(c(1,2)),height=c(2,1.5),width=1) par(mar=c(4,4,4,2)) - chart.Scatter.RP(object=RP, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, ...) + chart.Scatter.RP(object=RP, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, xlim=NULL, ylim=NULL, ...) par(mar=c(2,4,0,2)) chart.Weights.RP(object=RP, main="", neighbors=neighbors, ...) par(op) @@ -290,8 +292,8 @@ #' @param neighbors set of 'neighbor portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} #' @export -plot.optimize.portfolio.random <- function(x, ..., R=NULL, return.col='mean', risk.col='ES', chart.assets=FALSE, neighbors=NULL, main='optimized portfolio plot') { - charts.RP(RP=x, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, ...) +plot.optimize.portfolio.random <- function(x, ..., R=NULL, return.col='mean', risk.col='ES', chart.assets=FALSE, neighbors=NULL, xlim=NULL, ylim=NULL, main='optimized portfolio plot') { + charts.RP(RP=x, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, xlim=xlim, ylim=ylim, ...) } #' plot method for optimize.portfolio output @@ -315,6 +317,6 @@ #' @param neighbors set of 'neighbor portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} #' @export -plot.optimize.portfolio <- function(x, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, neighbors=NULL, main='optimized portfolio plot') { - charts.RP(RP=x, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, ...) +plot.optimize.portfolio <- function(x, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, neighbors=NULL, xlim=NULL, ylim=NULL, main='optimized portfolio plot') { + charts.RP(RP=x, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, xlim=xlim, ylim=ylim, ...) 
} \ No newline at end of file Modified: pkg/PortfolioAnalytics/man/chart.RiskReward.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.RiskReward.Rd 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/man/chart.RiskReward.Rd 2013-08-21 00:46:37 UTC (rev 2842) @@ -15,56 +15,62 @@ chart.Scatter.DE(object, neighbors = NULL, ..., return.col = "mean", risk.col = "ES", chart.assets = FALSE, element.color = "darkgray", - cex.axis = 0.8) + cex.axis = 0.8, xlim = NULL, ylim = NULL) chart.RiskReward.optimize.portfolio.DEoptim(object, neighbors = NULL, ..., return.col = "mean", risk.col = "ES", chart.assets = FALSE, - element.color = "darkgray", cex.axis = 0.8) + element.color = "darkgray", cex.axis = 0.8, + xlim = NULL, ylim = NULL) chart.Scatter.RP(object, neighbors = NULL, ..., return.col = "mean", risk.col = "ES", chart.assets = FALSE, element.color = "darkgray", - cex.axis = 0.8) + cex.axis = 0.8, xlim = NULL, ylim = NULL) chart.RiskReward.optimize.portfolio.random(object, neighbors = NULL, ..., return.col = "mean", risk.col = "ES", chart.assets = FALSE, - element.color = "darkgray", cex.axis = 0.8) + element.color = "darkgray", cex.axis = 0.8, + xlim = NULL, ylim = NULL) chart.Scatter.ROI(object, neighbors = NULL, ..., rp = FALSE, return.col = "mean", risk.col = "ES", chart.assets = FALSE, element.color = "darkgray", - cex.axis = 0.8) + cex.axis = 0.8, xlim = NULL, ylim = NULL) chart.RiskReward.optimize.portfolio.ROI(object, neighbors = NULL, ..., rp = FALSE, return.col = "mean", risk.col = "ES", chart.assets = FALSE, - element.color = "darkgray", cex.axis = 0.8) + element.color = "darkgray", cex.axis = 0.8, + xlim = NULL, ylim = NULL) chart.Scatter.pso(object, neighbors = NULL, ..., return.col = "mean", risk.col = "ES", chart.assets = FALSE, element.color = "darkgray", - cex.axis = 0.8) + cex.axis = 0.8, xlim = NULL, ylim = NULL) chart.RiskReward.optimize.portfolio.pso(object, neighbors = NULL, ..., return.col = "mean", risk.col = "ES", chart.assets = FALSE, - element.color = "darkgray", cex.axis = 0.8) + element.color = "darkgray", cex.axis = 0.8, + xlim = NULL, ylim = NULL) chart.Scatter.GenSA(object, neighbors = NULL, ..., rp = FALSE, return.col = "mean", risk.col = "ES", chart.assets = FALSE, element.color = "darkgray", - cex.axis = 0.8) + cex.axis = 0.8, ylim = NULL, xlim = NULL) chart.RiskReward.optimize.portfolio.GenSA(object, neighbors = NULL, ..., rp = FALSE, return.col = "mean", risk.col = "ES", chart.assets = FALSE, - element.color = "darkgray", cex.axis = 0.8) + element.color = "darkgray", cex.axis = 0.8, + ylim = NULL, xlim = NULL) chart.RiskReward(object, neighbors, ..., rp = FALSE, return.col = "mean", risk.col = "ES", - element.color = "darkgray", cex.axis = 0.8) + element.color = "darkgray", cex.axis = 0.8, + ylim = NULL, xlim = NULL) } \arguments{ \item{object}{optimal portfolio created by @@ -93,6 +99,12 @@ \item{element.color}{color for the default plot scatter points} + + \item{xlim}{set the x-axis limit, same as in + \code{\link{plot}}} + + \item{ylim}{set the y-axis limit, same as in + \code{\link{plot}}} } \description{ classic risk reward scatter Modified: pkg/PortfolioAnalytics/man/charts.DE.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.DE.Rd 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/man/charts.DE.Rd 2013-08-21 00:46:37 UTC (rev 2842) @@ -3,7 +3,8 @@ \title{scatter and weights chart for random portfolios} \usage{ 
charts.DE(DE, risk.col, return.col, chart.assets, - neighbors = NULL, main = "DEoptim.Portfolios", ...) + neighbors = NULL, main = "DEoptim.Portfolios", + xlim = NULL, ylim = NULL, ...) } \arguments{ \item{DE}{set of random portfolios created by Modified: pkg/PortfolioAnalytics/man/charts.GenSA.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.GenSA.Rd 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/man/charts.GenSA.Rd 2013-08-21 00:46:37 UTC (rev 2842) @@ -5,7 +5,8 @@ charts.GenSA(GenSA, rp = FALSE, return.col = "mean", risk.col = "ES", chart.assets = FALSE, cex.axis = 0.8, element.color = "darkgray", neighbors = NULL, - main = "GenSA.Portfolios", ...) + main = "GenSA.Portfolios", xlim = NULL, ylim = NULL, + ...) } \arguments{ \item{GenSA}{object created by Modified: pkg/PortfolioAnalytics/man/charts.ROI.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.ROI.Rd 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/man/charts.ROI.Rd 2013-08-21 00:46:37 UTC (rev 2842) @@ -2,10 +2,11 @@ \alias{charts.ROI} \title{scatter and weights chart for portfolios} \usage{ - charts.ROI(ROI, rp = FALSE, risk.col = "StdDev", + charts.ROI(ROI, rp = FALSE, risk.col = "ES", return.col = "mean", chart.assets = FALSE, cex.axis = 0.8, element.color = "darkgray", - neighbors = NULL, main = "ROI.Portfolios", ...) + neighbors = NULL, main = "ROI.Portfolios", xlim = NULL, + ylim = NULL, ...) } \arguments{ \item{ROI}{object created by Modified: pkg/PortfolioAnalytics/man/charts.RP.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.RP.Rd 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/man/charts.RP.Rd 2013-08-21 00:46:37 UTC (rev 2842) @@ -3,7 +3,8 @@ \title{scatter and weights chart for random portfolios} \usage{ charts.RP(RP, risk.col, return.col, chart.assets = FALSE, - neighbors = NULL, main = "Random.Portfolios", ...) + neighbors = NULL, main = "Random.Portfolios", + xlim = NULL, ylim = NULL, ...) } \arguments{ \item{RP}{set of random portfolios created by Modified: pkg/PortfolioAnalytics/man/charts.pso.Rd =================================================================== --- pkg/PortfolioAnalytics/man/charts.pso.Rd 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/man/charts.pso.Rd 2013-08-21 00:46:37 UTC (rev 2842) @@ -5,7 +5,7 @@ charts.pso(pso, return.col = "mean", risk.col = "ES", chart.assets = FALSE, cex.axis = 0.8, element.color = "darkgray", neighbors = NULL, - main = "PSO.Portfolios", ...) + main = "PSO.Portfolios", xlim = NULL, ylim = NULL, ...) 
} \arguments{ \item{pso}{object created by Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd 2013-08-21 00:46:37 UTC (rev 2842) @@ -4,8 +4,8 @@ \usage{ plot.optimize.portfolio.DEoptim(x, ..., return.col = "mean", risk.col = "ES", - chart.assets = FALSE, neighbors = NULL, - main = "optimized portfolio plot") + chart.assets = FALSE, neighbors = NULL, xlim = NULL, + ylim = NULL, main = "optimized portfolio plot") } \arguments{ \item{x}{set of portfolios created by Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd 2013-08-21 00:46:37 UTC (rev 2842) @@ -6,7 +6,8 @@ return.col = "mean", risk.col = "ES", chart.assets = FALSE, cex.axis = 0.8, element.color = "darkgray", neighbors = NULL, - main = "GenSA.Portfolios", ...) + main = "GenSA.Portfolios", xlim = NULL, ylim = NULL, + ...) } \arguments{ \item{GenSA}{object created by Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd 2013-08-21 00:46:37 UTC (rev 2842) @@ -6,7 +6,7 @@ risk.col = "ES", return.col = "mean", chart.assets = chart.assets, element.color = "darkgray", neighbors = NULL, - main = "ROI.Portfolios", ...) + main = "ROI.Portfolios", xlim = NULL, ylim = NULL, ...) } \arguments{ \item{ROI}{object created by Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.Rd 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.Rd 2013-08-21 00:46:37 UTC (rev 2842) @@ -4,7 +4,8 @@ \usage{ plot.optimize.portfolio(x, ..., return.col = "mean", risk.col = "ES", chart.assets = FALSE, - neighbors = NULL, main = "optimized portfolio plot") + neighbors = NULL, xlim = NULL, ylim = NULL, + main = "optimized portfolio plot") } \arguments{ \item{x}{set of portfolios created by Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd 2013-08-21 00:46:37 UTC (rev 2842) @@ -5,7 +5,7 @@ plot.optimize.portfolio.pso(pso, return.col = "mean", risk.col = "ES", chart.assets = FALSE, cex.axis = 0.8, element.color = "darkgray", neighbors = NULL, - main = "PSO.Portfolios", ...) + main = "PSO.Portfolios", xlim = NULL, ylim = NULL, ...) 
} \arguments{ \item{pso}{object created by Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd 2013-08-20 21:08:55 UTC (rev 2841) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.random.Rd 2013-08-21 00:46:37 UTC (rev 2842) @@ -4,8 +4,8 @@ \usage{ plot.optimize.portfolio.random(x, ..., R = NULL, return.col = "mean", risk.col = "ES", - chart.assets = FALSE, neighbors = NULL, - main = "optimized portfolio plot") + chart.assets = FALSE, neighbors = NULL, xlim = NULL, + ylim = NULL, main = "optimized portfolio plot") } \arguments{ \item{x}{set of portfolios created by From noreply at r-forge.r-project.org Wed Aug 21 07:01:14 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 21 Aug 2013 07:01:14 +0200 (CEST) Subject: [Returnanalytics-commits] r2843 - in pkg/PortfolioAnalytics: R man Message-ID: <20130821050114.A2A57183C75@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-21 07:01:14 +0200 (Wed, 21 Aug 2013) New Revision: 2843 Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R pkg/PortfolioAnalytics/man/extract.efficient.frontier.Rd Log: Modifying extract.efficient.frontier to accept an object created by optimize.portfolio or to pass in a portfolio object. Also modified the function to get the efficient frontier at the maximum return instead of the minimum 'out' value Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-21 00:46:37 UTC (rev 2842) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-21 05:01:14 UTC (rev 2843) @@ -10,46 +10,48 @@ # ############################################################################### -#' extract the efficient frontier of portfolios that meet your objectives over a range of risks +#' Extract the efficient frontier of portfolios that meet your objectives over a range of risks #' -#' note that this function will be extremely sensitive to the objectives in your -#' \code{\link{constraint}} object. It will be especially obvious if you +#' The efficient frontier is extracted from the set of portfolios created by +#' \code{optimize.portfolio} with \code{trace=TRUE}. +#' +#' If you do not have an optimal portfolio object created by +#' \code{\link{optimize.portfolio}}, you can pass in a portfolio object and an +#' optimization will be run via \code{\link{optimize.portfolio}} +#' +#' @note +#' Note that this function will be extremely sensitive to the objectives in your +#' \code{\link{portfolio}} object. It will be especially obvious if you #' are looking at a risk budget objective and your return preference is not set high enough. #' -#' If you do not have a set of portfolios to extract from, portfolios may be generated automatically, which would take a very long time.
#' -#' @param portfolios set of portfolios as generated by \code{\link{extractStats}} +#' @param object optimal portfolio object as created by \code{\link{optimize.portfolio}} #' @param from minimum value of the sequence #' @param to maximum value of the sequence #' @param by number to increment the sequence by #' @param match.col string name of column to use for risk (horizontal axis) -#' @param \dots any other passthru parameters +#' @param \dots any other passthru parameters to \code{optimize.portfolio} #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns -#' @param constraints an object of type "constraints" specifying the constraints for the optimization, see \code{\link{constraint}} -#' @param optimize_method one of "DEoptim" or "random" +#' @param portfolio an object of type "portfolio" specifying the constraints and objectives for the optimization, see \code{\link{portfolio.spec}} +#' @param optimize_method one of "DEoptim", "random", "ROI", "pso", or "GenSA" #' @export -extract.efficient.frontier <- -function (portfolios=NULL, - match.col='ES', from=0, to=1, by=.005, - ..., - R = NULL, - constraints = NULL, - optimize_method='random') +extract.efficient.frontier <- function (object=NULL, match.col='ES', from=0, to=1, by=0.005, ..., R=NULL, portfolio=NULL, optimize_method='random') { #TODO add a threshold argument for how close it has to be to count # do we need to recalc the constrained_objective too? I don't think so. + if(!inherits(object, "optimize.portfolio")) stop("object passed in must be of class 'optimize.portfolio'") set<-seq(from=from,to=to,by=by) set<-cbind(quantmod::Lag(set,1),as.matrix(set))[-1,] - if(is.null(object)){ - if(!is.null(R)){ - portfolios<-optimize.portfolio(constraints=constraints, R=R, optimize_method=optimize_method, ...) + if(is.null(object)){ + if(!is.null(R) & !is.null(portfolio)){ + portfolios<-optimize.portfolio(portfolio=portfolio, R=R, optimize_method=optimize_method[1], trace=TRUE, ...) } else { - stop('you must specify either a portfolio set or a return series') + stop('you must specify a portfolio object and a return series or an object of class optimize.portfolio') } } - xtract<-extractStats(portfolios) + xtract<-extractStats(object) columnnames=colnames(xtract) #if("package:multicore" %in% search() || require("multicore",quietly = TRUE)){ # mclapply @@ -67,7 +69,7 @@ result <- foreach(i=1:nrow(set),.inorder=TRUE, .combine=rbind, .errorhandling='remove') %do% { tmp<-xtract[which(xtract[,mtc]>=set[i,1] & xtract[,mtc] Author: rossbennett34 Date: 2013-08-21 19:36:36 +0200 (Wed, 21 Aug 2013) New Revision: 2844 Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R pkg/PortfolioAnalytics/R/extractstats.R Log: Modifying extractStats method for optimize.portfolio.ROI objects.
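Read together with r2843 above, the revised front end can be exercised as follows; a sketch under the same assumptions as the earlier one (R and pspec as defined there; the 0 to 0.10 ES range and the 0.005 step are arbitrary):

opt.rp <- optimize.portfolio(R=R, portfolio=pspec, optimize_method="random",
                             search_size=2000, trace=TRUE)
# slice the traced portfolios into ES buckets and keep the best-return portfolio in each
ef <- extract.efficient.frontier(object=opt.rp, match.col="ES",
                                 from=0, to=0.10, by=0.005)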
Modifying extract.efficient.frontier to use the max 'mean' as opposed to min 'out' Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-21 05:01:14 UTC (rev 2843) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-21 17:36:36 UTC (rev 2844) @@ -68,7 +68,8 @@ result <- foreach(i=1:nrow(set),.inorder=TRUE, .combine=rbind, .errorhandling='remove') %do% { tmp<-xtract[which(xtract[,mtc]>=set[i,1] & xtract[,mtc] Author: chenyian Date: 2013-08-21 20:26:19 +0200 (Wed, 21 Aug 2013) New Revision: 2845 Modified: pkg/FactorAnalytics/R/factorModelVaRDecomposition.R pkg/FactorAnalytics/R/plot.StatFactorModel.r pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw Log: add risk analysis in vignette. modify some codes. Modified: pkg/FactorAnalytics/R/factorModelVaRDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelVaRDecomposition.R 2013-08-21 17:36:36 UTC (rev 2844) +++ pkg/FactorAnalytics/R/factorModelVaRDecomposition.R 2013-08-21 18:26:19 UTC (rev 2845) @@ -44,7 +44,7 @@ #' fit.macro <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), #' factors.names=c("EDHEC.LS.EQ","SP500.TR"), #' data=managers.df,fit.method="OLS") -#' # risk factor contribution to ETL +#' # risk factor contribution to VaR #' # combine fund returns, factor returns and residual returns for HAM1 #' tmpData = cbind(managers.df[,1],managers.df[,c("EDHEC.LS.EQ","SP500.TR")] , #' residuals(fit.macro$asset.fit$HAM1)/sqrt(fit.macro$resid.variance[1])) Modified: pkg/FactorAnalytics/R/plot.StatFactorModel.r =================================================================== --- pkg/FactorAnalytics/R/plot.StatFactorModel.r 2013-08-21 17:36:36 UTC (rev 2844) +++ pkg/FactorAnalytics/R/plot.StatFactorModel.r 2013-08-21 18:26:19 UTC (rev 2845) @@ -389,7 +389,7 @@ factorModelSdDecomposition(x$loadings[,i], cov.factors, x$resid.variance[i]) } - # function to extract contribution to sd from list + # function to extract component contribution to sd from list getCSD = function(x) { x$cr.fm } @@ -397,8 +397,8 @@ cr.sd = sapply(factor.sd.decomp.list, getCSD) rownames(cr.sd) = c(colnames(x$factors), "residual") # create stacked barchart - barplot(cr.sd[,(1:max.show)], main="Factor Contributions to SD", - legend.text=T, args.legend=list(x="topleft")) + barplot(cr.sd[,(1:max.show)], main="Factor Contributions to SD",...) +# legend.text=T, args.legend=list(x="topright")) } , "7L" ={ factor.es.decomp.list = list() @@ -416,15 +416,15 @@ } - # stacked bar charts of percent contributions to ES + # stacked bar charts of component contributions to ES getCETL = function(x) { x$cES } # report as positive number cr.etl = sapply(factor.es.decomp.list, getCETL) rownames(cr.etl) = c(colnames(x$factors), "residual") - barplot(cr.etl[,(1:max.show)], main="Factor Contributions to ES", - legend.text=T, args.legend=list(x="topleft") ) + barplot(cr.etl[,(1:max.show)], main="Factor Contributions to ES",...) 
+# legend.text=T, args.legend=list(x="topright") ) }, "8L" = { factor.VaR.decomp.list = list() @@ -442,15 +442,15 @@ } - # stacked bar charts of percent contributions to VaR + # stacked bar charts of component contributions to VaR getCVaR = function(x) { x$cVaR.fm } # report as positive number cr.var = sapply(factor.VaR.decomp.list, getCVaR) rownames(cr.var) = c(colnames(x$factors), "residual") - barplot(cr.var[,(1:max.show)], main="Factor Contributions to VaR", - legend.text=T, args.legend=list(x="topleft")) + barplot(cr.var[,(1:max.show)], main="Factor Contributions to VaR",...) +# legend.text=T, args.legend=list(x="topright")) }, invisible() ) Modified: pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd 2013-08-21 17:36:36 UTC (rev 2844) +++ pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd 2013-08-21 18:26:19 UTC (rev 2845) @@ -57,7 +57,7 @@ fit.macro <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), factors.names=c("EDHEC.LS.EQ","SP500.TR"), data=managers.df,fit.method="OLS") -# risk factor contribution to ETL +# risk factor contribution to VaR # combine fund returns, factor returns and residual returns for HAM1 tmpData = cbind(managers.df[,1],managers.df[,c("EDHEC.LS.EQ","SP500.TR")] , residuals(fit.macro$asset.fit$HAM1)/sqrt(fit.macro$resid.variance[1])) Modified: pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw =================================================================== --- pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-21 17:36:36 UTC (rev 2844) +++ pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-21 18:26:19 UTC (rev 2845) @@ -1,5 +1,7 @@ \documentclass{article} \usepackage[utf8]{inputenc} +\usepackage{amsmath} +\usepackage{amsthm} % \VignetteIndexEntry{test file} % \VignetteKeywords{factor model, risk analytics} @@ -248,7 +250,7 @@ \begin{figure} \begin{center} -<>= +<>= <> @ \end{center} @@ -259,7 +261,7 @@ Similar to \verb at fitFundamentalFactorModel@, generic functions like \verb at summary@, \verb at print@, \verb at plot@ and \verb at predict@ can be used for statistical factor model. \section{Time Series Factor Model} -In Time Series facto model, factor returns $f_t$ is observed and takens as macroeconomic time series like GDP or other time series like market returns or credit spread. In our package, we have provided some common used times series in data set \verb at CommonFactors@. \verb at factors@ is monthly time series and \verb at factors.Q@ is quarterly time series. +In a Time Series factor model, the factor returns $f_t$ are observed and taken as macroeconomic time series like GDP, or other time series like market returns or credit spreads. In our package, we have provided some commonly used time series in the data set \verb at CommonFactors@. \verb at factors@ is a monthly time series and \verb at factors.Q@ is a quarterly time series. <>= data(CommonFactors) @ @@ -277,17 +279,16 @@ <>= fit.time <- fitTimeSeriesFactorModel(assets.names=tic,factors.names=c("SP500","Term.Spread","dVIX"),data=ts.data,fit.method="OLS") -names(fit.time) @ -\veb at asset.fit@ can show model fit of each assets, for example for asset \verb at AA@. +\verb at asset.fit@ can show the model fit for each asset, for example for asset \verb at AA@. <>= -summary(fit.time$asset.fit$AA) +fit.time$asset.fit$AA @ -shows only beta of SP500 is significant. -\verb at fitTimeSeriesFactorModel@ also have various variable selection algorithm to choose.
One can include all the factor and let the function to decide which one is the best model. For example, we inlcude all common factors and use method \verb at stepwise@ which utilizes \verb at step@ function in \verb at stat@ package +One can include all the factors and let the function decide which model is best. For example, we include all common factors and use the method \verb at stepwise@, which utilizes the \verb at step@ function in the \verb at stat@ package + <>= fit.time2 <- fitTimeSeriesFactorModel(assets.names=tic,factors.names=names(ts.factors),data=ts.data,fit.method="OLS",variable.selection = "stepwise") @ @@ -296,7 +297,83 @@ fit.time2$asset.fit$AA @ -Generic functions like \verb at summary@, \verb at print@, \verb at plot@ and \verb at predict@ can be used for Time Series factor model as well like previous section. +Generic functions like \verb at summary@, \verb at print@, \verb at plot@ and \verb at predict@ can also be used for time series factor models, as in the previous section. +\section{Risk Analysis} +\subsection{Factor Model Risk Budgeting} +One can do risk analysis easily with a factor model. According to Meucci (2007), a factor model can be represented as + +\begin{align} +r_{it} &= \alpha_i + \beta_{i1}f_{1t} + \beta_{i2}f_{2t} + \cdots + \beta_{ik}f_{kt} + \sigma_{i}z_{it},\;i=1 \cdots N,\;t=1 \cdots T \\ + &= \alpha_i + \tilde{\beta_i}'\tilde{F_t} +\end{align} + +where $z_{it}$ are the standardized residuals, $z_{it} = \epsilon_{it} / \sigma_i$, $\tilde{\beta_i} = [\beta_{1i},\dots,\beta_{ki}, \sigma_i]$, $\tilde{F_t}=[f_{1t},\dots,f_{kt}, z_{it}]$ + +Common risk measures like standard deviation, value-at-risk and expected shortfall are homogeneous functions of degree 1. By Euler's theorem, a risk metric (RM) can be decomposed as +\begin{align} +RM_i = \beta_{1i}\frac{\partial RM_i}{\partial \beta_{1i}} + \beta_{2i}\frac{\partial RM_i}{\partial \beta_{2i}} + \cdots + \beta_{ki}\frac{\partial RM_i}{\partial \beta_{ki}} + \sigma_{i}\frac{\partial RM_i}{\partial \sigma_{i}} +\end{align} + +where\\ +$\frac{\partial RM_i}{\partial \beta_{ki}}$ is called the marginal contribution of factor k to $RM_i$ \\ +$\beta_{ki}\frac{\partial RM_i}{\partial \beta_{ki}}$ is called the component contribution of factor k to $RM_i$ \\ +$\beta_{ki}\frac{\partial RM_i}{\partial \beta_{ki}}/RM_i$ is called the percentage contribution of factor k to $RM_i$ + +The \verb at factorAnalytics@ package provides decompositions of 3 different risk metrics, standard deviation (Std), Value-at-Risk (VaR) and Expected Shortfall (ES), under the historical, Normal and Cornish-Fisher distributions. + +For example, the factor model VaR decomposition with the Normal distribution for asset AA of a statistical factor model: +<>= +data.rd <- cbind(ret[,"AA"],fit.stat$factors,fit.stat$residuals[,"AA"]/sqrt(fit.stat$resid.variance["AA"])) +var.decp <- factorModelVaRDecomposition(data.rd,fit.stat$loadings[,"AA"], + fit.stat$resid.variance["AA"],tail.prob=0.05, + VaR.method="gaussian") +names(var.decp) +@ + +The VaR, the number and ids of the exceedances, and the marginal, component and percentage contributions to VaR are computed. Let us look at the VaR and the component contributions to VaR +<>= +var.decp$VaR.fm +var.decp$cVaR.fm +@ +It looks like the second factor contributes the largest risk to asset AA. + +One can use the \verb at plot@ method to see a barplot of the risk budget. The default is to show 6 assets.
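Because the Euler decomposition is additive, the component contributions must sum back to the total risk measure; a quick sanity check on the \verb at var.decp@ object above (a sketch reusing the session state; equality holds up to numerical error):

<<>>=
sum(var.decp$cVaR.fm)  # should reproduce var.decp$VaR.fm
@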
Figure \ref{fig4} shows component contributions to VaR for several different assets. + +<>= +plot(fit.stat,which.plot=8,legend.text=TRUE, args.legend=list(x="topright"), + VaR.method="gaussian") +@ + +\begin{figure} +\begin{center} +<>= +<> +@ +\end{center} +\caption{Component Contribution to VaR for Statistical Factor Model. } +\label{fig4} +\end{figure} + +\subsection{Portfolio Risk Budgeting} + +Let $Rp_t = Rp_t(w)$ denote the portfolio return based on the vector of portfolio weights $w$, and let $RM(w)$ denote a portfolio risk measure. The same Euler decomposition applies: + +\begin{align} +RM = w_{1}\frac{\partial RM}{\partial w_{1}} + w_{2}\frac{\partial RM}{\partial w_{2}} + \cdots + w_{N}\frac{\partial RM}{\partial w_{N}} +\end{align} + +where\\ +$\frac{\partial RM}{\partial w_{i}}$ is called the marginal contribution of asset i to RM \\ +$w_{i}\frac{\partial RM}{\partial w_{i}}$ is called the component contribution of asset i to RM \\ +$w_{i}\frac{\partial RM}{\partial w_{i}}/RM$ is called the percentage contribution of asset i to RM + +We can use the function \verb at VaR()@ in \verb at PerformanceAnalytics@. Suppose we have an equally weighted portfolio of the 63 assets in the data set \verb at ret@. The following code computes the portfolio VaR and the component and percentage contributions to VaR + +<>= +VaR(R=ret,method="gaussian",portfolio_method="component") +@ + + \end{document} \ No newline at end of file From noreply at r-forge.r-project.org Wed Aug 21 22:34:46 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 21 Aug 2013 22:34:46 +0200 (CEST) Subject: [Returnanalytics-commits] r2846 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130821203446.F3FED185818@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-21 22:34:46 +0200 (Wed, 21 Aug 2013) New Revision: 2846 Added: pkg/PortfolioAnalytics/man/meanvar.efficient.frontier.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/extract.efficient.frontier.R Log: adding function to calculate a mean-variance efficient frontier Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-21 18:26:19 UTC (rev 2845) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-21 20:34:46 UTC (rev 2846) @@ -61,6 +61,7 @@ export(is.constraint) export(is.objective) export(is.portfolio) +export(meanvar.efficient.frontier) export(minmax_objective) export(objective) export(optimize.portfolio_v2) Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-21 18:26:19 UTC (rev 2845) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-21 20:34:46 UTC (rev 2846) @@ -74,3 +74,59 @@ } return(result) } + +#' Generate the efficient frontier for a mean-variance portfolio +#' +#' This function generates the mean-variance efficient frontier of a portfolio +#' specifying constraints and objectives. To generate the mean-var efficient +#' frontier, the portfolio must have two objectives 1) "mean" and 2) "var".
+#' +#' @param portfolio a portfolio object with constraints and objectives created via \code{\link{portfolio.spec}} +#' @param R an xts or matrix of asset returns +#' @param n.portfolios number of portfolios to plot along the efficient frontier +#' @return a matrix of objective measure values and weights along the efficient frontier +#' @author Ross Bennett +#' @export +meanvar.efficient.frontier <- function(portfolio, R, n.portfolios=25){ + if(!is.portfolio(portfolio)) stop("portfolio object must be of class 'portfolio'") + # step 1: find the minimum return given the constraints + # step 2: find the maximum return given the constraints + # step 3: 'step' along the returns and run the optimization to calculate + # the weights and objective measures along the efficient frontier + + # for a mean-var efficient frontier, there must be two objectives 1) "mean" and 2) "var" + # get the names of the objectives + objnames <- unlist(lapply(portfolio$objectives, function(x) x$name)) + if(!((length(objnames) == 2) & ("var" %in% objnames) & ("mean" %in% objnames))){ + stop("The portfolio object must have both 'mean' and 'var' specified as objectives") + } + # get the index number of the var objective + var_idx <- which(unlist(lapply(portfolio$objectives, function(x) x$name)) == "var") + # get the index number of the mean objective + mean_idx <- which(unlist(lapply(portfolio$objectives, function(x) x$name)) == "mean") + + # set the risk_aversion to a very small number for equivalent to max return portfolio + portfolio$objectives[[var_idx]]$risk_aversion <- 1e-6 + + # run the optimization to get the maximum return + tmp <- optimize.portfolio(R=ret, portfolio=portfolio, optimize_method="ROI") + maxret <- extractObjectiveMeasures(tmp)$mean + + # set the risk_aversion to a very large number equivalent to a minvar portfolio + portfolio$objectives[[var_idx]]$risk_aversion <- 1e6 + tmp <- optimize.portfolio(R=ret, portfolio=portfolio, optimize_method="ROI") + stats <- extractStats(tmp) + minret <- stats["mean"] + + # length.out is the number of portfolios to create + ret_seq <- seq(from=minret, to=maxret, length.out=n.portfolios) + + out <- matrix(0, nrow=length(ret_seq), ncol=length(extractStats(tmp))) + + for(i in 1:length(ret_seq)){ + portfolio$objectives[[mean_idx]]$target <- ret_seq[i] + out[i, ] <- extractStats(optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI")) + } + colnames(out) <- names(stats) + return(out) +} Added: pkg/PortfolioAnalytics/man/meanvar.efficient.frontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/meanvar.efficient.frontier.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/meanvar.efficient.frontier.Rd 2013-08-21 20:34:46 UTC (rev 2846) @@ -0,0 +1,31 @@ +\name{meanvar.efficient.frontier} +\alias{meanvar.efficient.frontier} +\title{Generate the efficient frontier for a mean-variance portfolio} +\usage{ + meanvar.efficient.frontier(portfolio, R, + n.portfolios = 25) +} +\arguments{ + \item{portfolio}{a portfolio object with constraints and + objectives created via \code{\link{portfolio.spec}}} + + \item{R}{an xts or matrix of asset returns} + + \item{n.portfolios}{number of portfolios to plot along + the efficient frontier} +} +\value{ + a matrix of objective measure values and weights along + the efficient frontier +} +\description{ + This function generates the mean-variance efficient + frontier of a portfolio specifying constraints and + objectives. 
To generate the mean-var efficient frontier, + the portfolio must have two objectives 1) "mean" and 2) + "var". +} +\author{ + Ross Bennett +} + From noreply at r-forge.r-project.org Thu Aug 22 00:25:10 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 22 Aug 2013 00:25:10 +0200 (CEST) Subject: [Returnanalytics-commits] r2847 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130821222510.ED61918588F@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-22 00:25:10 +0200 (Thu, 22 Aug 2013) New Revision: 2847 Added: pkg/PortfolioAnalytics/man/meanetl.efficient.frontier.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/extract.efficient.frontier.R Log: adding function to calculate mean-etl efficient frontier Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-21 20:34:46 UTC (rev 2846) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-21 22:25:10 UTC (rev 2847) @@ -61,6 +61,7 @@ export(is.constraint) export(is.objective) export(is.portfolio) +export(meanetl.efficient.frontier) export(meanvar.efficient.frontier) export(minmax_objective) export(objective) Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-21 20:34:46 UTC (rev 2846) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-21 22:25:10 UTC (rev 2847) @@ -109,12 +109,12 @@ portfolio$objectives[[var_idx]]$risk_aversion <- 1e-6 # run the optimization to get the maximum return - tmp <- optimize.portfolio(R=ret, portfolio=portfolio, optimize_method="ROI") + tmp <- optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI") maxret <- extractObjectiveMeasures(tmp)$mean # set the risk_aversion to a very large number equivalent to a minvar portfolio portfolio$objectives[[var_idx]]$risk_aversion <- 1e6 - tmp <- optimize.portfolio(R=ret, portfolio=portfolio, optimize_method="ROI") + tmp <- optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI") stats <- extractStats(tmp) minret <- stats["mean"] @@ -130,3 +130,71 @@ colnames(out) <- names(stats) return(out) } + +#' Generate the efficient frontier for a mean-etl portfolio +#' +#' This function generates the mean-etl efficient frontier of a portfolio +#' specifying constraints and objectives. To generate the mean-etl efficient +#' frontier, the portfolio must have two objectives 1) "mean" and 2) "ETL/ES/CVaR". If +#' the only objective in the \code{portfolio} object is ETL/ES/CVaR, then we will +#' add a mean objective.
+#' +#' @param portfolio a portfolio object with constraints and objectives created via \code{\link{portfolio.spec}} +#' @param R an xts or matrix of asset returns +#' @param n.portfolios number of portfolios to plot along the efficient frontier +#' @return a matrix of objective measure values and weights along the efficient frontier +#' @author Ross Bennett +#' @export +meanetl.efficient.frontier <- function(portfolio, R, n.portfolios=25){ + if(!is.portfolio(portfolio)) stop("portfolio object must be of class 'portfolio'") + # step 1: find the minimum return given the constraints + # step 2: find the maximum return given the constraints + # step 3: 'step' along the returns and run the optimization to calculate + # the weights and objective measures along the efficient frontier + + objnames <- unlist(lapply(portfolio$objectives, function(x) x$name)) + + # The user might pass in a portfolio with only ES/ETL/CVaR as an objective + if(length(objnames) == 1 & objnames %in% c("ETL", "ES", "CVaR")){ + # Add the mean objective to the portfolio + portfolio <- add.objective(portfolio=portfolio, type="return", name="mean") + # get the objective names again after we add an objective to the portfolio + objnames <- unlist(lapply(portfolio$objectives, function(x) x$name)) + } + + # for a mean-etl efficient frontier, there must be two objectives 1) "mean" and 2) "ETL/ES/CVaR" + # get the names of the objectives + if(!((length(objnames) == 2) & any(objnames %in% c("ETL", "ES", "CVaR")) & ("mean" %in% objnames))){ + stop("The portfolio object must have both 'mean' and 'var' specified as objectives") + } + # get the index number of the etl objective + etl_idx <- which(objnames %in% c("ETL", "ES", "CVaR")) + # get the index number of the mean objective + mean_idx <- which(objnames == "mean") + + # create a temporary portfolio to find the max mean return + ret_obj <- return_objective(name="mean") + tportf <- insert_objectives(portfolio, list(ret_obj)) + + # run the optimization to get the maximum return + tmp <- optimize.portfolio(R=R, portfolio=tportf, optimize_method="ROI") + maxret <- extractObjectiveMeasures(tmp)$mean + + # run the optimization to get the return at the min ETL portfolio + tmp <- optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI") + stats <- extractStats(tmp) + minret <- stats["mean"] + + # length.out is the number of portfolios to create + ret_seq <- seq(from=minret, to=maxret, length.out=n.portfolios) + + out <- matrix(0, nrow=length(ret_seq), ncol=length(extractStats(tmp))) + + for(i in 1:length(ret_seq)){ + portfolio$objectives[[mean_idx]]$target <- ret_seq[i] + out[i, ] <- extractStats(optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI")) + } + colnames(out) <- names(stats) + return(out) +} + Added: pkg/PortfolioAnalytics/man/meanetl.efficient.frontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/meanetl.efficient.frontier.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/meanetl.efficient.frontier.Rd 2013-08-21 22:25:10 UTC (rev 2847) @@ -0,0 +1,32 @@ +\name{meanetl.efficient.frontier} +\alias{meanetl.efficient.frontier} +\title{Generate the efficient frontier for a mean-etl portfolio} +\usage{ + meanetl.efficient.frontier(portfolio, R, + n.portfolios = 25) +} +\arguments{ + \item{portfolio}{a portfolio object with constraints and + objectives created via \code{\link{portfolio.spec}}} + + \item{R}{an xts or matrix of asset returns} + + \item{n.portfolios}{number of portfolios to plot along + the efficient 
 frontier} +} +\value{ + a matrix of objective measure values and weights along + the efficient frontier +} +\description{ + This function generates the mean-etl efficient frontier + of a portfolio specifying constraints and objectives. To + generate the mean-etl efficient frontier, the portfolio + must have two objectives 1) "mean" and 2) "ETL/ES/CVaR". + If the only objective in the \code{portfolio} object is + ETL/ES/CVaR, then we will add a mean objective. +} +\author{ + Ross Bennett +} + From noreply at r-forge.r-project.org Thu Aug 22 00:32:48 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 22 Aug 2013 00:32:48 +0200 (CEST) Subject: [Returnanalytics-commits] r2848 - pkg/PortfolioAnalytics/R Message-ID: <20130821223248.DDD3018588F@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-22 00:32:48 +0200 (Thu, 22 Aug 2013) New Revision: 2848 Modified: pkg/PortfolioAnalytics/R/moment.functions.R Log: adding ETL and mETL in the switch statement for set.portfolio.moments Modified: pkg/PortfolioAnalytics/R/moment.functions.R =================================================================== --- pkg/PortfolioAnalytics/R/moment.functions.R 2013-08-21 22:25:10 UTC (rev 2847) +++ pkg/PortfolioAnalytics/R/moment.functions.R 2013-08-21 22:32:48 UTC (rev 2848) @@ -214,6 +214,8 @@ mES =, CVaR =, cVaR =, + ETL=, + mETL=, ES = { if(is.null(momentargs$mu)) momentargs$mu = matrix( as.vector(apply(R,2,'mean')),ncol=1); if(is.null(momentargs$sigma)) momentargs$sigma = cov(R) From noreply at r-forge.r-project.org Thu Aug 22 01:28:47 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 22 Aug 2013 01:28:47 +0200 (CEST) Subject: [Returnanalytics-commits] r2849 - pkg/PortfolioAnalytics/R Message-ID: <20130821232847.3CEC6185C55@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-22 01:28:46 +0200 (Thu, 22 Aug 2013) New Revision: 2849 Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R Log: Adding checks to mean-etl and mean-var efficient frontier functions. Check the objectives passed in and add appropriate objectives as needed. Check if a return_constraint was passed in and disable it.
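A minimal end-to-end sketch of the two frontier helpers with these checks in place (assumptions: the edhec data set and the required ROI solver plugins are installed; base and the other names are hypothetical). When only a risk objective is supplied, the missing 'mean' objective is added internally, as the log above describes:

library(PortfolioAnalytics)
data(edhec)
R <- edhec[, 1:4]
base <- portfolio.spec(assets=colnames(R))
base <- add.constraint(base, type="full_investment")
base <- add.constraint(base, type="long_only")
# mean-variance frontier; a 'mean' objective is added when only 'var' is given
ef.mv <- meanvar.efficient.frontier(add.objective(base, type="risk", name="var"),
                                    R=R, n.portfolios=10)
# mean-ETL frontier; a 'mean' objective is added when only 'ES' is given
ef.es <- meanetl.efficient.frontier(add.objective(base, type="risk", name="ES"),
                                    R=R, n.portfolios=10)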
Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-21 22:32:48 UTC (rev 2848) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-21 23:28:46 UTC (rev 2849) @@ -94,12 +94,33 @@ # step 3: 'step' along the returns and run the optimization to calculate # the weights and objective measures along the efficient frontier - # for a mean-var efficient frontier, there must be two objectives 1) "mean" and 2) "var" # get the names of the objectives objnames <- unlist(lapply(portfolio$objectives, function(x) x$name)) + + if(length(objnames) == 1){ + if(objnames == "mean"){ + # The user has only passed in a mean objective, add a var objective to the portfolio + portfolio <- add.objective(portfolio=portfolio, type="risk", name="var") + } else if(objnames == "var"){ + # The user has only passed in a var objective, add a mean objective + portfolio <- add.objective(portfolio=portfolio, type="return", name="mean") + } + # get the objective names again after we add an objective to the portfolio + objnames <- unlist(lapply(portfolio$objectives, function(x) x$name)) + } + + # for a mean-var efficient frontier, there must be two objectives 1) "mean" and 2) "var" if(!((length(objnames) == 2) & ("var" %in% objnames) & ("mean" %in% objnames))){ stop("The portfolio object must have both 'mean' and 'var' specified as objectives") } + + # If the user has passed in a portfolio object with return_constraint, we need to disable it + for(i in 1:length(portfolio$constraints)){ + if(inherits(portfolio$constraints[[i]], "return_constraint")){ + portfolio$constraints[[i]]$enabled <- FALSE + } + } + # get the index number of the var objective var_idx <- which(unlist(lapply(portfolio$objectives, function(x) x$name)) == "var") # get the index number of the mean objective @@ -154,10 +175,14 @@ objnames <- unlist(lapply(portfolio$objectives, function(x) x$name)) - # The user might pass in a portfolio with only ES/ETL/CVaR as an objective - if(length(objnames) == 1 & objnames %in% c("ETL", "ES", "CVaR")){ - # Add the mean objective to the portfolio - portfolio <- add.objective(portfolio=portfolio, type="return", name="mean") + if(length(objnames) == 1){ + if(objnames == "mean"){ + # The user has only passed in a mean objective, add ES objective to the portfolio + portfolio <- add.objective(portfolio=portfolio, type="risk", name="ES") + } else if(objnames %in% c("ETL", "ES", "CVaR")){ + # The user has only passed in ETL/ES/CVaR objective, add a mean objective + portfolio <- add.objective(portfolio=portfolio, type="return", name="mean") + } # get the objective names again after we add an objective to the portfolio objnames <- unlist(lapply(portfolio$objectives, function(x) x$name)) } @@ -167,6 +192,14 @@ if(!((length(objnames) == 2) & any(objnames %in% c("ETL", "ES", "CVaR")) & ("mean" %in% objnames))){ stop("The portfolio object must have both 'mean' and 'var' specified as objectives") } + + # If the user has passed in a portfolio object with return_constraint, we need to disable it + for(i in 1:length(portfolio$constraints)){ + if(inherits(portfolio$constraints[[i]], "return_constraint")){ + portfolio$constraints[[i]]$enabled <- FALSE + } + } + # get the index number of the etl objective etl_idx <- which(objnames %in% c("ETL", "ES", "CVaR")) # get the index number of the mean objective From noreply at r-forge.r-project.org Thu Aug 22 12:48:35 2013 From: noreply at 
r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 22 Aug 2013 12:48:35 +0200 (CEST) Subject: [Returnanalytics-commits] r2850 - pkg/PerformanceAnalytics/sandbox/pulkit/R Message-ID: <20130822104835.7E1F6185485@r-forge.r-project.org> Author: pulkit Date: 2013-08-22 12:48:35 +0200 (Thu, 22 Aug 2013) New Revision: 2850 Added: pkg/PerformanceAnalytics/sandbox/pulkit/R/gpdmle.R Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R Log: modified GPD function from pot package Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-21 23:28:46 UTC (rev 2849) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-22 10:48:35 UTC (rev 2850) @@ -1,14 +1,14 @@ #'@title #'Modelling Drawdown using Extreme Value Theory #' -#'@description +#"@description #'It has been shown empirically that Drawdowns can be modelled using Modified Generalized Pareto #'distribution(MGPD), Generalized Pareto Distribution(GPD) and other particular cases of MGPD such #'as weibull distribution \eqn{MGPD(\gamma,0,\psi)} and unit exponential distribution\eqn{MGPD(1,0,\psi)} #' #' Modified Generalized Pareto Distribution is given by the following formula #' -#' \deqn{G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m^\gamma}{\psi})^(-1/\eta), if \eta \neq 0 \\ 1- e^{-frac{m^\gamma}{\psi}}, if \eta = 0,\end{array}} +#' \dqeqn{G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m^\gamma}{\psi})^(-1/\eta), if \eta \neq 0 \\ 1- e^{-frac{m^\gamma}{\psi}}, if \eta = 0,\end{array}} #' #' Here \eqn{\gamma{\epsilon}R} is the modifying parameter. When \eqn{\gamma<1} the corresponding densities are #' strictly decreasing with heavier tail; the GDP is recovered by setting \eqn{\gamma = 1} .\eqn{\gamma \textgreater 1} @@ -29,13 +29,12 @@ #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset return #' @param type The type of distribution "gpd","pd","weibull" #' @param threshold The threshold beyond which the drawdowns have to be modelled -#' -#'@author Pulkit Mehrotra +#' #'@references #'Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003). Coppead Working Paper Series No. 359. #'Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322. #' -#'@export +#' DrawdownGPD<-function(R,type=c("gpd","pd","weibull"),threshold=0.90){ x = checkData(R) columns = ncol(R) @@ -43,10 +42,10 @@ type = type[1] dr = -Drawdowns(R) dr_sorted = sort(as.vector(dr)) - data = dr_sorted[(0.9*nrow(R)):nrow(R)] + #data = dr_sorted[(0.9*nrow(R)):nrow(R)] if(type=="gpd"){ - gpd = fitgpd(data) - return(gpd) + gpd_fit = gpd(dr_sorted,dr_sorted[threshold*nrow(R)]) + return(gpd_fit) } if(type=="wiebull"){ weibull = fitdistr(data,"weibull") Added: pkg/PerformanceAnalytics/sandbox/pulkit/R/gpdmle.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/gpdmle.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/gpdmle.R 2013-08-22 10:48:35 UTC (rev 2850) @@ -0,0 +1,172 @@ +## This function comes from the package "POT" . The gpd function +## corresponds to the gpdmle function. So, I'm very gratefull to Mathieu Ribatet. 
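With the gpd() port below in place, the fitting entry point from ExtremeDrawdown.R reduces to one call; a sketch (assuming an xts object of returns R; the 0.90 threshold mirrors the function's default):

dd.fit <- DrawdownGPD(R, type="gpd", threshold=0.90)
dd.fit$fitted.values  # MLE estimates of the GPD scale and shape
dd.fit$std.err        # their standard errors, from the return list below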
+ +gpd <- function(x, threshold, start, ..., + std.err.type = "observed", corr = FALSE, + method = "BFGS", warn.inf = TRUE){ + + if (all(c("observed", "expected", "none") != std.err.type)) + stop("``std.err.type'' must be one of 'observed', 'expected' or 'none'") + + nlpot <- function(scale, shape) { + -.C("gpdlik", exceed, nat, threshold, scale, + shape, dns = double(1), PACKAGE = "POT")$dns + } + + nn <- length(x) + + threshold <- rep(threshold, length.out = nn) + + high <- (x > threshold) & !is.na(x) + threshold <- as.double(threshold[high]) + exceed <- as.double(x[high]) + nat <- length(exceed) + + if(!nat) stop("no data above threshold") + + pat <- nat/nn + param <- c("scale", "shape") + + if(missing(start)) { + + start <- list(scale = 0, shape = 0) + start$scale <- mean(exceed) - min(threshold) + + start <- start[!(param %in% names(list(...)))] + + } + + if(!is.list(start)) + stop("`start' must be a named list") + + if(!length(start)) + stop("there are no parameters left to maximize over") + + nm <- names(start) + l <- length(nm) + f <- formals(nlpot) + names(f) <- param + m <- match(nm, param) + + if(any(is.na(m))) + stop("`start' specifies unknown arguments") + + formals(nlpot) <- c(f[m], f[-m]) + nllh <- function(p, ...) nlpot(p, ...) + + if(l > 1) + body(nllh) <- parse(text = paste("nlpot(", paste("p[",1:l, + "]", collapse = ", "), ", ...)")) + + fixed.param <- list(...)[names(list(...)) %in% param] + + if(any(!(param %in% c(nm,names(fixed.param))))) + stop("unspecified parameters") + + start.arg <- c(list(p = unlist(start)), fixed.param) + if( warn.inf && do.call("nllh", start.arg) == 1e6 ) + warning("negative log-likelihood is infinite at starting values") + + opt <- optim(start, nllh, hessian = TRUE, ..., method = method) + + if ((opt$convergence != 0) || (opt$value == 1e6)) { + warning("optimization may not have succeeded") + if(opt$convergence == 1) opt$convergence <- "iteration limit reached" + } + + else opt$convergence <- "successful" + + if (std.err.type != "none"){ + + tol <- .Machine$double.eps^0.5 + + if(std.err.type == "observed") { + + var.cov <- qr(opt$hessian, tol = tol) + if(var.cov$rank != ncol(var.cov$qr)){ + warning("observed information matrix is singular; passing std.err.type to ``expected''") + obs.fish <- FALSE + return + } + + if (std.err.type == "observed"){ + var.cov <- try(solve(var.cov, tol = tol), silent = TRUE) + + if(!is.matrix(var.cov)){ + warning("observed information matrix is singular; passing std.err.type to ''none''") + std.err.type <- "expected" + return + } + + else{ + std.err <- diag(var.cov) + if(any(std.err <= 0)){ + warning("observed information matrix is singular; passing std.err.type to ``expected''") + std.err.type <- "expected" + return + } + + std.err <- sqrt(std.err) + + if(corr) { + .mat <- diag(1/std.err, nrow = length(std.err)) + corr.mat <- structure(.mat %*% var.cov %*% .mat, dimnames = list(nm,nm)) + diag(corr.mat) <- rep(1, length(std.err)) + } + else { + corr.mat <- NULL + } + } + } + } + + if (std.err.type == "expected"){ + + shape <- opt$par[2] + scale <- opt$par[1] + a22 <- 2/((1+shape)*(1+2*shape)) + a12 <- 1/(scale*(1+shape)*(1+2*shape)) + a11 <- 1/((scale^2)*(1+2*shape)) + ##Expected Matix of Information of Fisher + expFisher <- nat * matrix(c(a11,a12,a12,a22),nrow=2) + + expFisher <- qr(expFisher, tol = tol) + var.cov <- solve(expFisher, tol = tol) + std.err <- sqrt(diag(var.cov)) + + if(corr) { + .mat <- diag(1/std.err, nrow = length(std.err)) + corr.mat <- structure(.mat %*% var.cov %*% .mat, dimnames = 
list(nm,nm)) + diag(corr.mat) <- rep(1, length(std.err)) + } + else + corr.mat <- NULL + } + + colnames(var.cov) <- nm + rownames(var.cov) <- nm + names(std.err) <- nm + } + + else{ + std.err <- std.err.type <- corr.mat <- NULL + var.cov <- NULL + } + + + param <- c(opt$par, unlist(fixed.param)) + scale <- param["scale"] + + var.thresh <- !all(threshold == threshold[1]) + + if (!var.thresh) + threshold <- threshold[1] + + list(fitted.values = opt$par, std.err = std.err, std.err.type = std.err.type, + var.cov = var.cov, fixed = unlist(fixed.param), param = param, + deviance = 2*opt$value, corr = corr.mat, convergence = opt$convergence, + counts = opt$counts, message = opt$message, threshold = threshold, + nat = nat, pat = pat, data = x, exceed = exceed, scale = scale, + var.thresh = var.thresh, est = "MLE", logLik = -opt$value, + opt.value = opt$value, hessian = opt$hessian) +} From noreply at r-forge.r-project.org Thu Aug 22 13:23:35 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 22 Aug 2013 13:23:35 +0200 (CEST) Subject: [Returnanalytics-commits] r2851 - in pkg/Meucci: . R demo man Message-ID: <20130822112335.E1853184C51@r-forge.r-project.org> Author: xavierv Date: 2013-08-22 13:23:35 +0200 (Thu, 22 Aug 2013) New Revision: 2851 Added: pkg/Meucci/R/MaxRsqCS.R pkg/Meucci/demo/S_CrossSectionConstrainedIndustries.R pkg/Meucci/man/MaxRsqCS.Rd Modified: pkg/Meucci/DESCRIPTION pkg/Meucci/NAMESPACE Log: - added S_CrossSectionConstrainedIndustries demo script from chapter 3 and its associated functions Modified: pkg/Meucci/DESCRIPTION =================================================================== --- pkg/Meucci/DESCRIPTION 2013-08-22 10:48:35 UTC (rev 2850) +++ pkg/Meucci/DESCRIPTION 2013-08-22 11:23:35 UTC (rev 2851) @@ -90,3 +90,4 @@ 'CovertCompoundedReturns2Price.R' ' FitOrnsteinUhlenbeck.R' + 'MaxRsqCS.R' Modified: pkg/Meucci/NAMESPACE =================================================================== --- pkg/Meucci/NAMESPACE 2013-08-22 10:48:35 UTC (rev 2850) +++ pkg/Meucci/NAMESPACE 2013-08-22 11:23:35 UTC (rev 2851) @@ -25,6 +25,7 @@ export(LognormalCopulaPdf) export(LognormalMoments2Parameters) export(LognormalParam2Statistics) +export(MaxRsqCS) export(MleRecursionForStudentT) export(MvnRnd) export(NoisyObservations) Added: pkg/Meucci/R/MaxRsqCS.R =================================================================== --- pkg/Meucci/R/MaxRsqCS.R (rev 0) +++ pkg/Meucci/R/MaxRsqCS.R 2013-08-22 11:23:35 UTC (rev 2851) @@ -0,0 +1,117 @@ +#' Solve for G that maximises sample r-square of X*G'*B' with X under constraints A*G<=D +#' and Aeq*G=Deq (A,D, Aeq,Deq conformable matrices),as described in A. Meucci, +#' "Risk and Asset Allocation", Springer, 2005. 
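Despite the inherited "by quadprog" comment, the solver behind MaxRsqCS below is ipop() from kernlab (added to the package dependencies in the next commit): it solves min c'x + x'Hx/2 subject to b <= Ax <= b + r and l <= x <= u, so passing r = 0 turns every row of A into an equality. A toy call illustrating that encoding (values arbitrary):

library(kernlab)
# minimise -x1 - x2 + (x1^2 + x2^2)/2  subject to  x1 + x2 = 1 and 0 <= x <= 1
sol <- ipop(c=c(-1, -1), H=diag(2), A=matrix(1, 1, 2), b=1,
            l=rep(0, 2), u=rep(1, 2), r=0)
primal(sol)  # optimum at c(0.5, 0.5)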
+#' +#' @param X : [matrix] (T x N) +#' @param B : [matrix] (T x K) +#' @param W : [matrix] (N x N) +#' @param A : [matrix] linear inequality constraints +#' @param D : [matrix] +#' @param Aeq : [matrix] linear equality constraints +#' @param Deq : [matrix] +#' @param lb : [vector] lower bound +#' @param ub : [vector] upper bound +#' +#' @return G : [matrix] (N x K) +#' +#' @note +#' Initial code by Tai-Ho Wang +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "MaxRsqCS.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} +#' @export + +MaxRsqCS = function(X, B, W, A = NULL, D = NULL, Aeq = NULL, Deq, lb = NULL, ub = NULL) +{ + N = ncol(X); + K = ncol(B); + + # compute sample estimates + Sigma_X = (dim(X)[1]-1)/dim(X)[1] * cov(X); + + + # restructure for feeding to the QP solver + Phi = t(W) %*% W; + + # restructure the linear term of the objective function + FirstDegree = matrix( Sigma_X %*% Phi %*% B, K * N, ); + + # restructure the quadratic term of the objective function + SecondDegree = Sigma_X; + + for( k in 1 : (N - 1) ) + { + SecondDegree = blkdiag(SecondDegree, Sigma_X); + } + + SecondDegree = t( kron( sqrt(Phi) %*% B, diag( 1, N ) ) ) %*% SecondDegree %*% kron( sqrt( Phi ) %*% B, diag( 1, N ) ); + + # restructure the equality constraints + if( !length(Aeq) ) + { + AEq = Aeq; + }else + { + AEq = blkdiag(Aeq); + for( k in 2 : K ) + { + AEq = blkdiag(AEq, Aeq); + } + } + + Deq = matrix( Deq, , 1); + + # restructure the inequality constraints + if( length(A) ) + { + AA = NULL + for( k in 1 : N ) + { + AA = cbind( AA, kron(diag( 1, K ), A[ k ] ) ); ##ok + } + }else + { + AA = A; + } + + if( length(D)) + { + D = matrix( D, , 1 ); + } + + # restructure upper and lower bounds + if( length(lb) ) + { + lb = matrix( lb, K * N, 1 ); + } + + if( length(ub) ) + { + ub = matrix( ub, K * N, 1 ); + } + + # initial guess + x0 = matrix( 1, K * N, 1 ); + if(length(AA)) + { + AA = ( AA + t(AA) ) / 2; # robustify + + } + + Amat = rbind( AEq, AA); + bvec = c( Deq, D ); + + # solve the constrained generalized r-square problem with ipop from kernlab + #options = optimset('LargeScale', 'off', 'MaxIter', 2000, 'Display', 'none'); + + + b = ipop( c = matrix( FirstDegree ), H = SecondDegree, A = Amat, b = bvec, l = lb , u = ub , r = rep(0, length(bvec)) ) + + # reshape for output + G = t( matrix( attributes(b)$primal, N, ) ); + + return( G ); +} \ No newline at end of file Added: pkg/Meucci/demo/S_CrossSectionConstrainedIndustries.R =================================================================== --- pkg/Meucci/demo/S_CrossSectionConstrainedIndustries.R (rev 0) +++ pkg/Meucci/demo/S_CrossSectionConstrainedIndustries.R 2013-08-22 11:23:35 UTC (rev 2851) @@ -0,0 +1,89 @@ + +################################################################################################################## +### This script fits a cross-sectional linear factor model creating industry factors, +### +### == Chapter 3 == +################################################################################################################## +#' This script fits a cross-sectional linear factor model creating industry factors, where the industry factors +#' are constrained to be uncorrelated with the market as described in A. Meucci, "Risk and Asset Allocation", +#' Springer, 2005, Chapter 3.
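On the equality constraint this demo uses: for factors F = X %*% t(G), the covariance between the factors and the market return m'X is G %*% Sigma_X %*% m, so requiring zero correlation with the market is linear in G, which is what the Aeq and Deq = 0 below encode. Once the script has run, the constraint can be verified directly (a sketch reusing the script's own objects):

# covariance of each extracted factor with the market portfolio return;
# every entry should vanish up to solver tolerance
print( t(m) %*% Sigma_X %*% t(G) )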
+#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_CrossSectionConstrainedIndustries.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} + +################################################################################################################## +### Loads weekly stock returns X and indices stock returns F +load("../data/securitiesTS.Rda"); +Data_Securities = securitiesTS$data[ , -1 ]; # 1st column is date + +load("../data/securitiesIndustryClassification.Rda"); +Securities_IndustryClassification = securitiesIndustryClassification$data; +################################################################################################################## +### Linear returns for stocks +X = diff( Data_Securities ) / Data_Securities[ -nrow(Data_Securities), ]; +X = X[ ,1:30 ]; # consider first stocks only to lower run time (comment this line otherwise) + +T = dim(X)[1]; +N = dim(X)[2]; +B = Securities_IndustryClassification[ 1:N, ]; +K = dim(B)[ 2 ]; +m = array( 1/N, N ); + +# compute sample estimates +E_X = matrix( apply(X, 2, mean ) ); +Sigma_X = (dim(X)[1]-1)/dim(X)[1] * cov(X); + +# The optimal loadings turn out to be the standard multivariate weighted-OLS. +Phi = diag(1 / diag( Sigma_X ), length(diag( Sigma_X ) ) ); +W = sqrt( Phi ); + +################################################################################################################## +### Solve for G that maximises sample r-square of X*G'*B' with X +### under constraints A*G<=D and Aeq*G=Deq (A,D, Aeq,Deq conformable matrices) +A = NULL; +Aeq = t(m) %*% t(Sigma_X); +Deq = matrix( 0, K, 1 ); +#BOUNDARIES +lb = -100 +ub = 700 +G = MaxRsqCS(X, B, W, A, D, Aeq, Deq, lb, ub); + +# compute intercept a and residual U +F = X %*% t(G); +E_F = matrix(apply(F, 2, mean)); +a = E_X - B %*% E_F; +A_ = repmat(t(a), T, 1); +U = X - A_ - F %*% t(B); + +################################################################################################################## +### Residual analysis + +M = cbind( U, F ); +SigmaJoint_UF = (dim(M)[1]-1)/dim(M)[1] * cov( M ); +CorrJoint_UF = cov2cor(SigmaJoint_UF); +Sigma_F = (dim(F)[1]-1)/dim(F)[1] * cov(F); +Corr_F = cov2cor( Sigma_F ); +Corr_F = tril(Corr_F, -1); +Corr_F = Corr_F[ Corr_F != 0 ]; + +Corr_U = tril(CorrJoint_UF[ 1:N, 1:N ], -1); +Corr_U = Corr_U[ Corr_U != 0 ]; +mean_Corr_U = mean( abs(Corr_U)); +max_Corr_U = max( abs(Corr_U)); +disp(mean_Corr_U); +disp(max_Corr_U); + +dev.new(); +hist(Corr_U, 100); + +Corr_UF = CorrJoint_UF[ 1:N, (N+1):(N+K) ]; +mean_Corr_UF = mean( abs(Corr_UF ) ); +max_Corr_UF = max( abs(Corr_UF ) ); +disp(mean_Corr_UF); +disp(max_Corr_UF); + +dev.new(); +hist(Corr_UF, 100); \ No newline at end of file Added: pkg/Meucci/man/MaxRsqCS.Rd =================================================================== --- pkg/Meucci/man/MaxRsqCS.Rd (rev 0) +++ pkg/Meucci/man/MaxRsqCS.Rd 2013-08-22 11:23:35 UTC (rev 2851) @@ -0,0 +1,48 @@ +\name{MaxRsqCS} +\alias{MaxRsqCS} +\title{Solve for G that maximises sample r-square of X*G'*B' with X under constraints A*G<=D +and Aeq*G=Deq (A,D, Aeq,Deq conformable matrices),as described in A.
Meucci, +"Risk and Asset Allocation", Springer, 2005.} +\usage{ + MaxRsqCS(X, B, W, A = NULL, D = NULL, Aeq = NULL, Deq, + lb = NULL, ub = NULL) +} +\arguments{ + \item{X}{: [matrix] (T x N)} + + \item{B}{: [matrix] (T x K)} + + \item{W}{: [matrix] (N x N)} + + \item{A}{: [matrix] linear inequality constraints} + + \item{D}{: [matrix]} + + \item{Aeq}{: [matrix] linear equality constraints} + + \item{Deq}{: [matrix]} + + \item{lb}{: [vector] lower bound} + + \item{ub}{: [vector] upper bound} +} +\value{ + G : [matrix] (N x K) +} +\description{ + Solve for G that maximises sample r-square of X*G'*B' + with X under constraints A*G<=D and Aeq*G=Deq (A,D, + Aeq,Deq conformable matrices),as described in A. Meucci, + "Risk and Asset Allocation", Springer, 2005. +} +\note{ + Initial code by Tai-Ho Wang +} +\author{ + Xavier Valls \email{flamejat at gmail.com} +} +\references{ + \url{http://symmys.com/node/170} See Meucci's script for + "MaxRsqCS.m" +} + From noreply at r-forge.r-project.org Thu Aug 22 13:25:45 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 22 Aug 2013 13:25:45 +0200 (CEST) Subject: [Returnanalytics-commits] r2852 - pkg/Meucci Message-ID: <20130822112545.14881184C51@r-forge.r-project.org> Author: xavierv Date: 2013-08-22 13:25:44 +0200 (Thu, 22 Aug 2013) New Revision: 2852 Modified: pkg/Meucci/DESCRIPTION Log: - added kernlab package to requirements Modified: pkg/Meucci/DESCRIPTION =================================================================== --- pkg/Meucci/DESCRIPTION 2013-08-22 11:23:35 UTC (rev 2851) +++ pkg/Meucci/DESCRIPTION 2013-08-22 11:25:44 UTC (rev 2852) @@ -33,7 +33,8 @@ R.utils, mvtnorm, dlm, - quadprog + quadprog, + kernlab Suggests: limSolve, Matrix, From noreply at r-forge.r-project.org Thu Aug 22 15:02:02 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 22 Aug 2013 15:02:02 +0200 (CEST) Subject: [Returnanalytics-commits] r2853 - in pkg/PerformanceAnalytics: . R man Message-ID: <20130822130202.9C007184C2B@r-forge.r-project.org> Author: braverock Date: 2013-08-22 15:02:02 +0200 (Thu, 22 Aug 2013) New Revision: 2853 Modified: pkg/PerformanceAnalytics/DESCRIPTION pkg/PerformanceAnalytics/NAMESPACE pkg/PerformanceAnalytics/R/ActivePremium.R pkg/PerformanceAnalytics/R/table.CAPM.R pkg/PerformanceAnalytics/man/ActivePremium.Rd pkg/PerformanceAnalytics/man/table.CAPM.Rd Log: - fix alias problems that were breaking R CMD check - bump version Modified: pkg/PerformanceAnalytics/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/DESCRIPTION 2013-08-22 11:25:44 UTC (rev 2852) +++ pkg/PerformanceAnalytics/DESCRIPTION 2013-08-22 13:02:02 UTC (rev 2853) @@ -1,7 +1,7 @@ Package: PerformanceAnalytics Type: Package Title: Econometric tools for performance and risk analysis. -Version: 1.1.0 +Version: 1.1.1 Date: $Date$ Author: Peter Carl, Brian G. Peterson Maintainer: Brian G. 
Peterson Modified: pkg/PerformanceAnalytics/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/NAMESPACE 2013-08-22 11:25:44 UTC (rev 2852) +++ pkg/PerformanceAnalytics/NAMESPACE 2013-08-22 13:02:02 UTC (rev 2853) @@ -127,6 +127,7 @@ table.Autocorrelation, table.CalendarReturns, table.CAPM, + table.SFM, table.CaptureRatios, table.Correlation, table.Distributions, Modified: pkg/PerformanceAnalytics/R/ActivePremium.R =================================================================== --- pkg/PerformanceAnalytics/R/ActivePremium.R 2013-08-22 11:25:44 UTC (rev 2852) +++ pkg/PerformanceAnalytics/R/ActivePremium.R 2013-08-22 13:02:02 UTC (rev 2853) @@ -1,13 +1,13 @@ #' Active Premium or Active Return -#' +#' #' The return on an investment's annualized return minus the benchmark's #' annualized return. -#' +#' #' Active Premium = Investment's annualized return - Benchmark's annualized #' return -#' +#' #' Also commonly referred to as 'active return'. -#' +#' #' @param Ra return vector of the portfolio #' @param Rb return vector of the benchmark asset #' @param scale number of periods in a year (daily scale = 252, monthly scale = @@ -19,15 +19,17 @@ #' Management},Fall 1994, 49-58. #' @keywords ts multivariate distribution models #' @examples -#' +#' #' data(managers) #' ActivePremium(managers[, "HAM1", drop=FALSE], managers[, "SP500 TR", drop=FALSE]) -#' ActivePremium(managers[,1,drop=FALSE], managers[,8,drop=FALSE]) -#' ActivePremium(managers[,1:6], managers[,8,drop=FALSE]) +#' ActivePremium(managers[,1,drop=FALSE], managers[,8,drop=FALSE]) +#' ActivePremium(managers[,1:6], managers[,8,drop=FALSE]) #' ActivePremium(managers[,1:6], managers[,8:7,drop=FALSE]) #' @rdname ActivePremium -#' @aliases ActivePremium, ActiveReturn -#' @export +#' @aliases +#' ActivePremium +#' ActiveReturn +#' @export ActiveReturn <- ActivePremium <- function (Ra, Rb, scale = NA) { # @author Peter Carl @@ -35,7 +37,7 @@ Ra = checkData(Ra) Rb = checkData(Rb) - Ra.ncols = NCOL(Ra) + Ra.ncols = NCOL(Ra) Rb.ncols = NCOL(Rb) pairs = expand.grid(1:Ra.ncols, 1:Rb.ncols) Modified: pkg/PerformanceAnalytics/R/table.CAPM.R =================================================================== --- pkg/PerformanceAnalytics/R/table.CAPM.R 2013-08-22 11:25:44 UTC (rev 2852) +++ pkg/PerformanceAnalytics/R/table.CAPM.R 2013-08-22 13:02:02 UTC (rev 2853) @@ -1,11 +1,11 @@ #' Single Factor Asset-Pricing Model Summary: Statistics and Stylized Facts -#' +#' #' Takes a set of returns and relates them to a benchmark return. Provides a #' set of measures related to an excess return single factor model, or CAPM. -#' +#' #' This table will show statistics pertaining to an asset against a set of #' benchmarks, or statistics for a set of assets against a benchmark. -#' +#' #' @param Ra a vector of returns to test, e.g., the asset to be examined #' @param Rb a matrix, data.frame, or timeSeries of benchmark(s) to test the #' asset against. 
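The root cause of the R CMD check breakage being fixed here: when several names share one @aliases line, roxygen2 splits the list on whitespace, so a trailing comma is carried into the generated entry, yielding the malformed \alias{ActivePremium,} visible in the .Rd diffs. Listing one alias per line is the safe pattern; a minimal sketch for any function exported under two names:

#' @rdname ActivePremium
#' @aliases
#' ActivePremium
#' ActiveReturn
#' @export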
@@ -19,16 +19,18 @@ #' \code{\link{InformationRatio}} \cr \code{\link{TreynorRatio}} #' @keywords ts multivariate distribution models #' @examples -#' +#' #' data(managers) #' table.SFM(managers[,1:3,drop=FALSE], managers[,8,drop=FALSE], Rf = managers[,10,drop=FALSE]) -#' +#' #' result = table.SFM(managers[,1:3,drop=FALSE], managers[,8,drop=FALSE], Rf = managers[,10,drop=FALSE]) #' textplot(result, rmar = 0.8, cmar = 1.5, max.cex=.9, halign = "center", valign = "top", row.valign="center", wrap.rownames=15, wrap.colnames=10, mar = c(0,0,3,0)+0.1) #' title(main="Single Factor Model Related Statistics") -#' +#' #' @rdname table.CAPM -#' @aliases table.CAPM, table.SFM +#' @aliases +#' table.CAPM +#' table.SFM #' @export table.SFM <- table.CAPM <- function (Ra, Rb, scale = NA, Rf = 0, digits = 4) {# @author Peter Carl @@ -84,20 +86,20 @@ model.lm = lm(merged.assets[,1] ~ merged.assets[,2]) alpha = coef(model.lm)[[1]] beta = coef(model.lm)[[2]] - CAPMbull = CAPM.beta.bull(Ra[,column.a], Rb[,column.b],Rf) #inefficient, recalcs excess returns and intercept - CAPMbear = CAPM.beta.bear(Ra[,column.a], Rb[,column.b],Rf) #inefficient, recalcs excess returns and intercept + CAPMbull = CAPM.beta.bull(Ra[,column.a], Rb[,column.b],Rf) #inefficient, recalcs excess returns and intercept + CAPMbear = CAPM.beta.bear(Ra[,column.a], Rb[,column.b],Rf) #inefficient, recalcs excess returns and intercept htest = cor.test(merged.assets[,1], merged.assets[,2]) #active.premium = (Return.annualized(merged.assets[,1,drop=FALSE], scale = scale) - Return.annualized(merged.assets[,2,drop=FALSE], scale = scale)) active.premium = ActivePremium(Ra=Ra[,column.a],Rb=Rb[,column.b], scale = scale) #tracking.error = sqrt(sum(merged.assets[,1] - merged.assets[,2])^2/(length(merged.assets[,1])-1)) * sqrt(scale) - tracking.error = TrackingError(Ra[,column.a], Rb[,column.b],scale=scale) + tracking.error = TrackingError(Ra[,column.a], Rb[,column.b],scale=scale) #treynor.ratio = Return.annualized(merged.assets[,1,drop=FALSE], scale = scale)/beta treynor.ratio = TreynorRatio(Ra=Ra[,column.a], Rb=Rb[,column.b], Rf = Rf, scale = scale) - + z = c( alpha, beta, - CAPMbull, + CAPMbull, CAPMbear, summary(model.lm)$r.squared, ((1+alpha)^scale - 1), @@ -108,7 +110,7 @@ active.premium/tracking.error, treynor.ratio ) - + znames = c( "Alpha", "Beta", @@ -123,7 +125,7 @@ "Information Ratio", "Treynor Ratio" ) - + if(column.a == 1 & column.b == 1) { result.df = data.frame(Value = z, row.names = znames) colnames(result.df) = paste(columnnames.a[column.a], columnnames.b[column.b], sep = " to ") Modified: pkg/PerformanceAnalytics/man/ActivePremium.Rd =================================================================== --- pkg/PerformanceAnalytics/man/ActivePremium.Rd 2013-08-22 11:25:44 UTC (rev 2852) +++ pkg/PerformanceAnalytics/man/ActivePremium.Rd 2013-08-22 13:02:02 UTC (rev 2853) @@ -1,5 +1,5 @@ \name{ActiveReturn} -\alias{ActivePremium,} +\alias{ActivePremium} \alias{ActiveReturn} \title{Active Premium or Active Return} \usage{ Modified: pkg/PerformanceAnalytics/man/table.CAPM.Rd =================================================================== --- pkg/PerformanceAnalytics/man/table.CAPM.Rd 2013-08-22 11:25:44 UTC (rev 2852) +++ pkg/PerformanceAnalytics/man/table.CAPM.Rd 2013-08-22 13:02:02 UTC (rev 2853) @@ -1,5 +1,5 @@ \name{table.SFM} -\alias{table.CAPM,} +\alias{table.CAPM} \alias{table.SFM} \title{Single Factor Asset-Pricing Model Summary: Statistics and Stylized Facts} \usage{ From noreply at r-forge.r-project.org Thu Aug 22 17:36:03 2013 
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 22 Aug 2013 17:36:03 +0200 (CEST) Subject: [Returnanalytics-commits] r2854 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130822153603.CD88C184EB5@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-22 17:36:03 +0200 (Thu, 22 Aug 2013) New Revision: 2854 Added: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.ROI.Rd Modified: pkg/PortfolioAnalytics/DESCRIPTION pkg/PortfolioAnalytics/R/applyFUN.R Log: adding chart.Efficient.Frontier function for optimize.portfolio.ROI objects Modified: pkg/PortfolioAnalytics/DESCRIPTION =================================================================== --- pkg/PortfolioAnalytics/DESCRIPTION 2013-08-22 13:02:02 UTC (rev 2853) +++ pkg/PortfolioAnalytics/DESCRIPTION 2013-08-22 15:36:03 UTC (rev 2854) @@ -52,3 +52,4 @@ 'charts.GenSA.R' 'chart.Weights.R' 'chart.RiskReward.R' + 'charts.efficient.frontier.R' Modified: pkg/PortfolioAnalytics/R/applyFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/applyFUN.R 2013-08-22 13:02:02 UTC (rev 2853) +++ pkg/PortfolioAnalytics/R/applyFUN.R 2013-08-22 15:36:03 UTC (rev 2854) @@ -103,6 +103,11 @@ #fun = match.fun(mean) #nargs$x = R }, + var = { + return(as.numeric(apply(R, 2, var))) + #fun = match.fun(mean) + #nargs$x = R + }, sd =, StdDev = { fun = match.fun(StdDev) Added: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R (rev 0) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-22 15:36:03 UTC (rev 2854) @@ -0,0 +1,81 @@ + +#' chart the efficient frontier and risk-reward scatter plot of the assets +#' +#' This function charts the efficient frontier and risk-reward scatter plot of the assets. +#' The mean-var or mean-etl efficient frontier can be plotted for optimal +#' portfolio objects created by \code{optimize.portfolio}. +#' +#' If \code{match.col="var"}, the mean-variance efficient frontier is plotted. +#' +#' If \code{match.col="ETL"} (also "ES" or "CVaR"), the mean-etl efficient frontier is plotted. +#' +#' @param object optimal portfolio created by \code{\link{optimize.portfolio}} +#' @param match.col string name of column to use for risk (horizontal axis). +#' \code{match.col} must match the name of an objective in the \code{portfolio} +#' object. +#' +#' @param xlim set the x-axis limit, same as in \code{\link{plot}} +#' @param ylim set the y-axis limit, same as in \code{\link{plot}} +#' @param main a main title for the plot +#' @param ... 
passthrough parameters to \code{\link{plot}} +chart.Efficient.Frontier.optimize.portfolio.ROI <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ + if(!inherits(object, "optimize.portfolio.ROI")) stop("object must be of class optimize.portfolio.ROI") + + portf <- object$portfolio + R <- object$R + wts <- object$weights + objectclass <- class(object)[1] + + objnames <- unlist(lapply(portf$objectives, function(x) x$name)) + if(!(match.col %in% objnames)){ + stop("match.col must match an objective name") + } + + # get the optimal return and risk metrics + xtract <- extractStats(object=object) + columnames <- colnames(xtract) + if(!(("mean") %in% columnames)){ + # we need to calculate the mean given the optimal weights + opt_ret <- applyFUN(R=R, weights=wts, FUN="mean") + } else { + opt_ret <- xtract["mean"] + } + opt_risk <- xtract[match.col] + + # get the data to plot scatter of asset returns + asset_ret <- scatterFUN(R=R, FUN="mean") + asset_risk <- scatterFUN(R=R, FUN=match.col) + rnames <- colnames(R) + + if(match.col %in% c("ETL", "ES", "CVaR")){ + frontier <- meanetl.efficient.frontier(portfolio=portf, R=R, n.portfolios=n.portfolios) + } + if(match.col %in% objnames){ + frontier <- meanvar.efficient.frontier(portfolio=portf, R=R, n.portfolios=n.portfolios) + } + # data points to plot the frontier + x.f <- frontier[, match.col] + y.f <- frontier[, "mean"] + + # set the x and y limits + if(is.null(xlim)){ + xlim <- range(c(x.f, asset_risk)) + } + if(is.null(ylim)){ + ylim <- range(c(y.f, asset_ret)) + } + + # plot a scatter of the assets + plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="mean", main=main, xlim=xlim, ylim=ylim, pch=5, axes=FALSE, ...) + text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + # plot the efficient line + lines(x=x.f, y=y.f, col="darkgray", lwd=2) + # plot the optimal portfolio + points(opt_risk, opt_ret, col="blue", pch=16) # optimal + text(x=opt_risk, y=opt_ret, labels="Optimal",col="blue", pos=4, cex=0.8) + axis(1, cex.axis = cex.axis, col = element.color) + axis(2, cex.axis = cex.axis, col = element.color) + box(col = element.color) +} + + Added: pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.ROI.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.ROI.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.ROI.Rd 2013-08-22 15:36:03 UTC (rev 2854) @@ -0,0 +1,42 @@ +\name{chart.Efficient.Frontier.optimize.portfolio.ROI} +\alias{chart.Efficient.Frontier.optimize.portfolio.ROI} +\title{chart the efficient frontier and risk-reward scatter plot of the assets} +\usage{ + chart.Efficient.Frontier.optimize.portfolio.ROI(object, + match.col = "ES", n.portfolios = 25, xlim = NULL, + ylim = NULL, cex.axis = 0.8, + element.color = "darkgray", + main = "Efficient Frontier", ...) +} +\arguments{ + \item{object}{optimal portfolio created by + \code{\link{optimize.portfolio}}} + + \item{match.col}{string name of column to use for risk + (horizontal axis). 
\code{match.col} must match the name + of an objective in the \code{portfolio} object.} + + \item{xlim}{set the x-axis limit, same as in + \code{\link{plot}}} + + \item{ylim}{set the y-axis limit, same as in + \code{\link{plot}}} + + \item{main}{a main title for the plot} + + \item{...}{passthrough parameters to \code{\link{plot}}} +} +\description{ + This function charts the efficient frontier and + risk-reward scatter plot of the assets. The mean-var or + mean-etl efficient frontier can be plotted for optimal + portfolio objects created by \code{optimize.portfolio}. +} +\details{ + If \code{match.col="var"}, the mean-variance efficient + frontier is plotted. + + If \code{match.col="ETL"} (also "ES" or "CVaR"), the + mean-etl efficient frontier is plotted. +} + From noreply at r-forge.r-project.org Thu Aug 22 17:48:18 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 22 Aug 2013 17:48:18 +0200 (CEST) Subject: [Returnanalytics-commits] r2855 - pkg/PerformanceAnalytics/sandbox/pulkit/R Message-ID: <20130822154818.1A880184EB5@r-forge.r-project.org> Author: pulkit Date: 2013-08-22 17:48:17 +0200 (Thu, 22 Aug 2013) New Revision: 2855 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R Log: Added weibull distribution Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-22 15:36:03 UTC (rev 2854) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-22 15:48:17 UTC (rev 2855) @@ -34,29 +34,27 @@ #'Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003). Coppead Working Paper Series No. 359. #'Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322. #' -#' DrawdownGPD<-function(R,type=c("gpd","pd","weibull"),threshold=0.90){ x = checkData(R) columns = ncol(R) columnnames = colnames(R) type = type[1] dr = -Drawdowns(R) - dr_sorted = sort(as.vector(dr)) - #data = dr_sorted[(0.9*nrow(R)):nrow(R)] + data = sort(as.vector(dr)) + threshold = data[threshold*nrow(R)] if(type=="gpd"){ - gpd_fit = gpd(dr_sorted,dr_sorted[threshold*nrow(R)]) + gpd_fit = gpd(data,threshold) return(gpd_fit) } if(type=="wiebull"){ - weibull = fitdistr(data,"weibull") - return(weibull) - } - if(type=="pd"){ - scale = min(data) - shape = length(data)/(sum(log(data))-length(data)*log(a)) - } - - + # From package MASS + if(any( data<= 0)) stop("Weibull values must be > 0") + lx <- log(data) + m <- mean(lx); v <- var(lx) + shape <- 1.2/sqrt(v); scale <- exp(m + 0.572/shape) + result <- list(shape = shape, scale = scale) + return(result) + } } From noreply at r-forge.r-project.org Thu Aug 22 19:59:18 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 22 Aug 2013 19:59:18 +0200 (CEST) Subject: [Returnanalytics-commits] r2856 - in pkg/PortfolioAnalytics: R vignettes Message-ID: <20130822175918.316461846AF@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-22 19:59:17 +0200 (Thu, 22 Aug 2013) New Revision: 2856 Modified: pkg/PortfolioAnalytics/R/applyFUN.R pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw pkg/PortfolioAnalytics/vignettes/ROI_vignette.pdf Log: Adding ETL and mETL to applyFUN. Correcting error in ROI_vignette. 
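To make the vignette correction above concrete, here is a minimal usage sketch. It is illustrative only: the edhec data set (from PerformanceAnalytics), the six-asset max-return specification, and an installed ROI plugin are assumptions, not part of this commit.

library(PortfolioAnalytics)
library(PerformanceAnalytics)
data(edhec)
returns <- edhec[, 1:6]
# a simple maximum mean return specification
portf <- portfolio.spec(assets=colnames(returns))
portf <- add.constraint(portfolio=portf, type="full_investment")
portf <- add.constraint(portfolio=portf, type="long_only")
portf <- add.objective(portfolio=portf, type="return", name="mean")
# trace=TRUE is the vignette correction: it keeps the information the
# chart and extract* functions need
opt <- optimize.portfolio(R=returns, portfolio=portf,
                          optimize_method="ROI", trace=TRUE)
# with the applyFUN change, "ETL" and "mETL" now resolve to ES, so any
# of the ES/ETL/CVaR spellings should work as risk.col
chart.RiskReward(opt, return.col="mean", risk.col="ES", chart.assets=TRUE)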
Modified: pkg/PortfolioAnalytics/R/applyFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/applyFUN.R 2013-08-22 15:48:17 UTC (rev 2855) +++ pkg/PortfolioAnalytics/R/applyFUN.R 2013-08-22 17:59:17 UTC (rev 2856) @@ -44,6 +44,8 @@ mES =, CVaR =, cVaR =, + ETL=, + mETL=, ES = { fun = match.fun(ES) if(is.null(nargs$portfolio_method)) nargs$portfolio_method='single' Modified: pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw =================================================================== --- pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw 2013-08-22 15:48:17 UTC (rev 2855) +++ pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw 2013-08-22 17:59:17 UTC (rev 2856) @@ -83,7 +83,7 @@ The next step is to run the optimization. Note that \code{optimize\_method="ROI"} is specified in the call to \code{optimize.portfolio} to select the solver used for the optimization. <<>>= # Run the optimization -opt_maxret <- optimize.portfolio(R=returns, portfolio=portf_maxret, optimize_method="ROI") +opt_maxret <- optimize.portfolio(R=returns, portfolio=portf_maxret, optimize_method="ROI", trace=TRUE) @ The print method for the \code{opt\_maxret} object shows the call, optimal weights, and the objective measure @@ -122,18 +122,23 @@ Volatility as the risk metric <>= -chart.Scatter.ROI(opt_maxret, R=returns,return.col="mean", risk.col="sd", main="Maximum Return") +chart.RiskReward(opt_maxret,return.col="mean", risk.col="sd", + chart.assets=TRUE, xlim=c(0.01, 0.05), main="Maximum Return") @ Expected tail loss as the risk metric <>= -chart.Scatter.ROI(opt_maxret, R=returns, return.col="mean", risk.col="ETL", main="Maximum Return", invert=FALSE, p=0.9) +chart.RiskReward(opt_maxret, return.col="mean", risk.col="ES", + chart.assets=TRUE, xlim=c(0.02, 0.18), main="Maximum Return") @ \subsection{Backtesting} An out of sample backtest is run with \code{optimize.portfolio.rebalancing}. In this example, an initial training period of 36 months is used and the portfolio is rebalanced quarterly. <<>>= -bt_maxret <- optimize.portfolio.rebalancing(R=returns, portfolio=portf_maxret, optimize_method="ROI", rebalance_on="quarters", training_period=36, trace=TRUE) +bt_maxret <- optimize.portfolio.rebalancing(R=returns, portfolio=portf_maxret, + optimize_method="ROI", + rebalance_on="quarters", + training_period=36) @ The \code{bt\_maxret} object is a list containing the optimal weights and objective measure at each rebalance period. @@ -168,13 +173,16 @@ <<>>= # Run the optimization opt_gmv <- optimize.portfolio(R=returns, portfolio=portf_minvar, - optimize_method="ROI") + optimize_method="ROI", trace=TRUE) print(opt_gmv) @ \subsubsection{Backtesting} <<>>= -bt_gmv <- optimize.portfolio.rebalancing(R=returns, portfolio=portf_minvar, optimize_method="ROI", rebalance_on="quarters", training_period=36) +bt_gmv <- optimize.portfolio.rebalancing(R=returns, portfolio=portf_minvar, + optimize_method="ROI", + rebalance_on="quarters", + training_period=36) @ @@ -184,7 +192,8 @@ Constraints can be added to the \code{portf\_minvar} portfolio object previously created. 
<<>>= # Add long only constraints -portf_minvar <- add.constraint(portfolio=portf_minvar, type="box", min=0, max=1) +portf_minvar <- add.constraint(portfolio=portf_minvar, type="box", + min=0, max=1) # Add group constraints portf_minvar <- add.constraint(portfolio=portf_minvar, @@ -197,13 +206,17 @@ \subsubsection{Optimization} <<>>= # Run the optimization -opt_minvar <- optimize.portfolio(R=returns, portfolio=portf_minvar, optimize_method="ROI") +opt_minvar <- optimize.portfolio(R=returns, portfolio=portf_minvar, + optimize_method="ROI", trace=TRUE) print(opt_minvar) @ \subsubsection{Backtesting} <<>>= -bt_minvar <- optimize.portfolio.rebalancing(R=returns, portfolio=portf_minvar, optimize_method="ROI", rebalance_on="quarters", training_period=36) +bt_minvar <- optimize.portfolio.rebalancing(R=returns, portfolio=portf_minvar, + optimize_method="ROI", + rebalance_on="quarters", + training_period=36) @ \section{Maximizing Quadratic Utility} @@ -250,7 +263,8 @@ opt_qu <- optimize.portfolio(R=returns, portfolio=init_portf, constraints=qu_constr, objectives=qu_obj, - optimize_method="ROI") + optimize_method="ROI", + trace=TRUE) @ \subsection{Backtesting} Modified: pkg/PortfolioAnalytics/vignettes/ROI_vignette.pdf =================================================================== (Binary files differ) From noreply at r-forge.r-project.org Thu Aug 22 20:02:20 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 22 Aug 2013 20:02:20 +0200 (CEST) Subject: [Returnanalytics-commits] r2857 - pkg/PortfolioAnalytics/R Message-ID: <20130822180220.6EEBF1846AF@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-22 20:02:19 +0200 (Thu, 22 Aug 2013) New Revision: 2857 Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R Log: adding the optimal portfolio stats to the extract.efficient.frontier return object Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-22 17:59:17 UTC (rev 2856) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-22 18:02:19 UTC (rev 2857) @@ -53,6 +53,8 @@ xtract<-extractStats(object) columnnames=colnames(xtract) + # optimal portfolio stats from xtract + opt <- xtract[which.min(xtract[, "out"]),] #if("package:multicore" %in% search() || require("multicore",quietly = TRUE)){ # mclapply #} @@ -72,6 +74,8 @@ tmp<-tmp[which.max(tmp[,'mean']),] #tmp } + # combine the stats from the optimal portfolio to result matrix + result <- rbind(opt, result) return(result) } From noreply at r-forge.r-project.org Fri Aug 23 00:33:48 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 23 Aug 2013 00:33:48 +0200 (CEST) Subject: [Returnanalytics-commits] r2858 - pkg/PortfolioAnalytics/R Message-ID: <20130822223348.86F3C185577@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-23 00:33:48 +0200 (Fri, 23 Aug 2013) New Revision: 2858 Modified: pkg/PortfolioAnalytics/R/charts.PSO.R pkg/PortfolioAnalytics/R/extractstats.R Log: Modifying extractStats for optimize.portfolio.pso objects to run constrained_objective on the normalized PSOoutput weights to get the objective_measures. Modifying charts.Scatter.pso to work with the modified extractStats function. 
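A hedged sketch of the reworked pso path follows; it assumes the returns and portf objects from the earlier sketch, plus the pso package and a registered foreach backend for the %dopar% loop in extractStats.

require(pso)
opt_pso <- optimize.portfolio(R=returns, portfolio=portf,
                              optimize_method="pso", trace=TRUE)
# extractStats now runs constrained_objective on each set of normalized
# swarm weights, so the result gains objective-measure columns next to
# the 'out' and 'w.*' columns; that step is what can make it slow
xt <- extractStats(opt_pso)
head(xt)
# chart.Scatter.pso falls back to applyFUN when return.col or risk.col
# does not match a column of the extractStats output
chart.Scatter.pso(opt_pso, return.col="mean", risk.col="ES")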
Modified: pkg/PortfolioAnalytics/R/charts.PSO.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-22 18:02:19 UTC (rev 2857) +++ pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-22 22:33:48 UTC (rev 2858) @@ -67,40 +67,108 @@ #' @export chart.Scatter.pso <- function(object, neighbors=NULL, ..., return.col="mean", risk.col="ES", chart.assets=FALSE, element.color = "darkgray", cex.axis=0.8, xlim=NULL, ylim=NULL){ if(!inherits(object, "optimize.portfolio.pso")) stop("object must be of class 'optimize.portfolio.pso'") + R <- object$R - # Object with the "out" value in the first column and the normalized weights - # The first row is the optimal "out" value and the optimal weights - tmp <- extractStats(object) + # portfolio <- object$portfolio + xtract = extractStats(object) + columnnames = colnames(xtract) + #return.column = grep(paste("objective_measures",return.col,sep='.'),columnnames) + return.column = pmatch(return.col,columnnames) + if(is.na(return.column)) { + return.col = paste(return.col,return.col,sep='.') + return.column = pmatch(return.col,columnnames) + } + #risk.column = grep(paste("objective_measures",risk.col,sep='.'),columnnames) + risk.column = pmatch(risk.col,columnnames) + if(is.na(risk.column)) { + risk.col = paste(risk.col,risk.col,sep='.') + risk.column = pmatch(risk.col,columnnames) + } - # Get the weights - wts <- tmp[,-1] + # if(is.na(return.column) | is.na(risk.column)) stop(return.col,' or ',risk.col, ' do not match extractStats output') - returnpoints <- applyFUN(R=R, weights=wts, FUN=return.col, ...=...) - riskpoints <- applyFUN(R=R, weights=wts, FUN=risk.col, ...=...) - + # If the user has passed in return.col or risk.col that does not match extractStats output + # This will give the flexibility of passing in return or risk metrics that are not + # objective measures in the optimization. This may cause issues with the "neighbors" + # functionality since that is based on the "out" column + if(is.na(return.column) | is.na(risk.column)){ + return.col <- gsub("\\..*", "", return.col) + risk.col <- gsub("\\..*", "", risk.col) + warning(return.col,' or ', risk.col, ' do not match extractStats output of $objective_measures slot') + # Get the matrix of weights for applyFUN + wts_index <- grep("w.", columnnames) + wts <- xtract[, wts_index] + if(is.na(return.column)){ + tmpret <- applyFUN(R=R, weights=wts, FUN=return.col) + xtract <- cbind(tmpret, xtract) + colnames(xtract)[which(colnames(xtract) == "tmpret")] <- return.col + } + if(is.na(risk.column)){ + tmprisk <- applyFUN(R=R, weights=wts, FUN=risk.col) + xtract <- cbind(tmprisk, xtract) + colnames(xtract)[which(colnames(xtract) == "tmprisk")] <- risk.col + } + columnnames = colnames(xtract) + return.column = pmatch(return.col,columnnames) + if(is.na(return.column)) { + return.col = paste(return.col,return.col,sep='.') + return.column = pmatch(return.col,columnnames) + } + risk.column = pmatch(risk.col,columnnames) + if(is.na(risk.column)) { + risk.col = paste(risk.col,risk.col,sep='.') + risk.column = pmatch(risk.col,columnnames) + } + } if(chart.assets){ # Include risk reward scatter of asset returns asset_ret <- scatterFUN(R=R, FUN=return.col, ...=...) asset_risk <- scatterFUN(R=R, FUN=risk.col, ...=...) 
rnames <- colnames(R) + xlim <- range(c(xtract[,risk.column], asset_risk)) + ylim <- range(c(xtract[,return.column], asset_ret)) } else { asset_ret <- NULL asset_risk <- NULL } - # get limits for x and y axis - if(is.null(ylim)){ - ylim <- range(returnpoints, asset_ret) + # plot the portfolios from PSOoutput + plot(xtract[,risk.column],xtract[,return.column], xlab=risk.col, ylab=return.col, col="darkgray", axes=FALSE, xlim=xlim, ylim=ylim, ...) + + ## @TODO: Generalize this to find column containing the "risk" metric + if(length(names(object)[which(names(object)=='constrained_objective')])) { + result.slot<-'constrained_objective' + } else { + result.slot<-'objective_measures' } - if(is.null(xlim)){ - xlim <- range(riskpoints, asset_risk) + objcols<-unlist(object[[result.slot]]) + names(objcols)<-PortfolioAnalytics:::name.replace(names(objcols)) + return.column = pmatch(return.col,names(objcols)) + if(is.na(return.column)) { + return.col = paste(return.col,return.col,sep='.') + return.column = pmatch(return.col,names(objcols)) } - - # plot the portfolios - plot(x=riskpoints, y=returnpoints, xlab=risk.col, ylab=return.col, xlim=xlim, ylim=ylim, col="darkgray", axes=FALSE, ...) - points(x=riskpoints[1], y=returnpoints[1], col="blue", pch=16) # optimal - text(x=riskpoints[1], y=returnpoints[1], labels="Optimal",col="blue", pos=4, cex=0.8) - + risk.column = pmatch(risk.col,names(objcols)) + if(is.na(risk.column)) { + risk.col = paste(risk.col,risk.col,sep='.') + risk.column = pmatch(risk.col,names(objcols)) + } + # risk and return metrics for the optimal weights if the RP object does not + # contain the metrics specified by return.col or risk.col + if(is.na(return.column) | is.na(risk.column)){ + return.col <- gsub("\\..*", "", return.col) + risk.col <- gsub("\\..*", "", risk.col) + # warning(return.col,' or ', risk.col, ' do not match extractStats output of $objective_measures slot') + opt_weights <- object$weights + ret <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=return.col)) + risk <- as.numeric(applyFUN(R=R, weights=opt_weights, FUN=risk.col)) + points(risk, ret, col="blue", pch=16) #optimal + text(x=risk, y=ret, labels="Optimal",col="blue", pos=4, cex=0.8) + } else { + points(objcols[risk.column], objcols[return.column], col="blue", pch=16) # optimal + text(x=objcols[risk.column], y=objcols[return.column], labels="Optimal",col="blue", pos=4, cex=0.8) + } + # plot the risk-reward scatter of the assets if(chart.assets){ points(x=asset_risk, y=asset_ret) Modified: pkg/PortfolioAnalytics/R/extractstats.R =================================================================== --- pkg/PortfolioAnalytics/R/extractstats.R 2013-08-22 18:02:19 UTC (rev 2857) +++ pkg/PortfolioAnalytics/R/extractstats.R 2013-08-22 22:33:48 UTC (rev 2858) @@ -247,6 +247,8 @@ #' #' This function will extract the weights (swarm positions) from the PSO output #' and the out value (swarm fitness values) for each iteration of the optimization. +#' This function can be slow because we need to run \code{constrained_objective} +#' to calculate the objective measures on the weights. 
#' #' @param object list returned by optimize.portfolio #' @param prefix prefix to add to output row names @@ -259,6 +261,9 @@ # Check if object$PSOoutput is null, the user called optimize.portfolio with trace=FALSE if(is.null(object$PSOoutput)) stop("PSOoutput is null, trace=TRUE must be specified in optimize.portfolio") + R <- object$R + portfolio <- object$portfolio + normalize_weights <- function(weights){ # normalize results if necessary if(!is.null(constraints$min_sum) | !is.null(constraints$max_sum)){ @@ -305,8 +310,14 @@ # combine the optimal out value to the vector of out values tmpout <- c(object$out, tmpout) - result <- cbind(tmpout, psoweights) - colnames(result) <- c("out", paste('w',names(object$weights),sep='.')) + # run constrained_objective on the weights to get the objective measures in a matrix + stopifnot("package:foreach" %in% search() || suppressMessages(require("foreach",quietly = TRUE))) + obj <- foreach(i=1:nrow(psoweights), .inorder=TRUE, .combine=rbind, .errorhandling='remove') %dopar% { + unlist(constrained_objective(w=psoweights[i,], R=R, portfolio=portfolio, trace=TRUE)$objective_measures) + } + objnames <- name.replace(colnames(obj)) + result <- cbind(obj, tmpout, psoweights) + colnames(result) <- c(objnames, "out", paste('w',names(object$weights),sep='.')) rownames(result) <- paste(prefix, "pso.portf", index(tmpout), sep=".") return(result) } From noreply at r-forge.r-project.org Fri Aug 23 00:46:07 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 23 Aug 2013 00:46:07 +0200 (CEST) Subject: [Returnanalytics-commits] r2859 - in pkg/PortfolioAnalytics: R man Message-ID: <20130822224607.28BA1185577@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-23 00:46:06 +0200 (Fri, 23 Aug 2013) New Revision: 2859 Added: pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.Rd Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/man/extractStats.optimize.portfolio.pso.Rd Log: Adding function to chart the efficient frontier for optimize.portfolio objects. Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-22 22:33:48 UTC (rev 2858) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-22 22:46:06 UTC (rev 2859) @@ -78,4 +78,85 @@ box(col = element.color) } - +#' chart the efficient frontier and risk-reward scatter plot of the assets +#' +#' This function charts the efficient frontier and risk-reward scatter plot of the assets. +#' The efficient frontier plotted is based on the the trace information (sets of +#' portfolios tested by the solver at each iteration) in objects created by +#' \code{optimize.portfolio}. When running \code{optimize.portfolio}, +#' \code{trace=TRUE} must be specified. +#' +#' @param object optimal portfolio created by \code{\link{optimize.portfolio}} +#' @param match.col string name of column to use for risk (horizontal axis). +#' \code{match.col} must match the name of an objective in the \code{portfolio} +#' object. +#' @param xlim set the x-axis limit, same as in \code{\link{plot}} +#' @param ylim set the y-axis limit, same as in \code{\link{plot}} +#' @param main a main title for the plot +#' @param ... 
passthrough parameters to \code{\link{plot}} +chart.Efficient.Frontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ + # This function will work with objects of class optimize.portfolio.DEoptim, + # optimize.portfolio.random, and optimize.portfolio.pso + + if(inherits(object, "optimize.portfolio.GenSA")){ + stop("GenSA does not return any useable trace information for portfolios tested, thus we cannot extract an efficient portfolio") + } + + if(!inherits(object, "optimize.portfolio")) stop("object must be of class optimize.portfolio") + + portf <- object$portfolio + R <- object$R + if(is.null(R)) stop(paste("Not able to get asset returns from", object)) + wts <- object$weights + + # get the stats from the object + xtract <- extractStats(object=object) + columnames <- colnames(xtract) + + # Check if match.col is in extractStats output + if(!(match.col %in% columnames)){ + stop(paste(match.col, "is not a column in extractStats output")) + } + + # check if 'mean' is in extractStats output + if(!("mean" %in% columnames)){ + stop("mean is not a column in extractStats output") + } + + # get the stats of the optimal portfolio + optstats <- xtract[which.min(xtract[, "out"]), ] + opt_ret <- optstats["mean"] + opt_risk <- optstats[match.col] + + # get the data to plot scatter of asset returns + asset_ret <- scatterFUN(R=R, FUN="mean") + asset_risk <- scatterFUN(R=R, FUN=match.col) + rnames <- colnames(R) + + # get the data of the efficient frontier + frontier <- extract.efficient.frontier(object=object, match.col=match.col) + + # data points to plot the frontier + x.f <- frontier[, match.col] + y.f <- frontier[, "mean"] + + # set the x and y limits + if(is.null(xlim)){ + xlim <- range(c(x.f, asset_risk)) + } + if(is.null(ylim)){ + ylim <- range(c(y.f, asset_ret)) + } + + # plot a scatter of the assets + plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="mean", main=main, xlim=xlim, ylim=ylim, pch=5, axes=FALSE, ...) + text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + # plot the efficient line + lines(x=x.f, y=y.f, col="darkgray", lwd=2) + # plot the optimal portfolio + points(opt_risk, opt_ret, col="blue", pch=16) # optimal + text(x=opt_risk, y=opt_ret, labels="Optimal",col="blue", pos=4, cex=0.8) + axis(1, cex.axis = cex.axis, col = element.color) + axis(2, cex.axis = cex.axis, col = element.color) + box(col = element.color) +} Added: pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.Rd 2013-08-22 22:46:06 UTC (rev 2859) @@ -0,0 +1,39 @@ +\name{chart.Efficient.Frontier.optimize.portfolio} +\alias{chart.Efficient.Frontier.optimize.portfolio} +\title{chart the efficient frontier and risk-reward scatter plot of the assets} +\usage{ + chart.Efficient.Frontier.optimize.portfolio(object, + match.col = "ES", n.portfolios = 25, xlim = NULL, + ylim = NULL, cex.axis = 0.8, + element.color = "darkgray", + main = "Efficient Frontier", ...) +} +\arguments{ + \item{object}{optimal portfolio created by + \code{\link{optimize.portfolio}}} + + \item{match.col}{string name of column to use for risk + (horizontal axis). 
\code{match.col} must match the name + of an objective in the \code{portfolio} object.} + + \item{xlim}{set the x-axis limit, same as in + \code{\link{plot}}} + + \item{ylim}{set the y-axis limit, same as in + \code{\link{plot}}} + + \item{main}{a main title for the plot} + + \item{...}{passthrough parameters to \code{\link{plot}}} +} +\description{ + This function charts the efficient frontier and + risk-reward scatter plot of the assets. The efficient + frontier plotted is based on the the trace information + (sets of portfolios tested by the solver at each + iteration) in objects created by + \code{optimize.portfolio}. When running + \code{optimize.portfolio}, \code{trace=TRUE} must be + specified. +} + Modified: pkg/PortfolioAnalytics/man/extractStats.optimize.portfolio.pso.Rd =================================================================== --- pkg/PortfolioAnalytics/man/extractStats.optimize.portfolio.pso.Rd 2013-08-22 22:33:48 UTC (rev 2858) +++ pkg/PortfolioAnalytics/man/extractStats.optimize.portfolio.pso.Rd 2013-08-22 22:46:06 UTC (rev 2859) @@ -16,7 +16,10 @@ \description{ This function will extract the weights (swarm positions) from the PSO output and the out value (swarm fitness - values) for each iteration of the optimization. + values) for each iteration of the optimization. This + function can be slow because we need to run + \code{constrained_objective} to calculate the objective + measures on the weights. } \author{ Ross Bennett From noreply at r-forge.r-project.org Fri Aug 23 03:42:11 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 23 Aug 2013 03:42:11 +0200 (CEST) Subject: [Returnanalytics-commits] r2860 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130823014211.9D61A185B57@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-23 03:42:10 +0200 (Fri, 23 Aug 2013) New Revision: 2860 Added: pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd Removed: pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.ROI.Rd pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/charts.efficient.frontier.R Log: Adding chart.EfficientFrontier as a generic function. Changed the name of the subfunctions to be consistent with naming conventions. Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-22 22:46:06 UTC (rev 2859) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-23 01:42:10 UTC (rev 2860) @@ -3,6 +3,9 @@ export(applyFUN) export(box_constraint) export(CCCgarch.MM) +export(chart.EfficientFrontier.optimize.portfolio.ROI) +export(chart.EfficientFrontier.optimize.portfolio) +export(chart.EfficientFrontier) export(chart.RiskReward.optimize.portfolio.DEoptim) export(chart.RiskReward.optimize.portfolio.GenSA) export(chart.RiskReward.optimize.portfolio.pso) Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-22 22:46:06 UTC (rev 2859) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-23 01:42:10 UTC (rev 2860) @@ -1,28 +1,58 @@ -#' chart the efficient frontier and risk-reward scatter plot of the assets +#' chart the efficient frontier and risk-return scatter plot of the assets #' -#' This function charts the efficient frontier and risk-reward scatter plot of the assets. 
-#' The mean-var or mean-etl efficient frontier can be plotted for optimal -#' portfolio objects created by \code{optimize.portfolio}. +#' This function charts the efficient frontier and risk-return scatter plot of +#' the assets given an object created by \code{optimize.portfolio}. #' -#' If \code{match.col="var"}, the mean-variance efficient frontier is plotted. +#' For objects created by optimize.portfolio with 'DEoptim', 'random', or 'pso' +#' specified as the optimize_method: +#' \itemize{ +#' \item The efficient frontier plotted is based on the trace information (sets of +#' portfolios tested by the solver at each iteration) in objects created by +#' \code{optimize.portfolio}. +#' } #' -#' If \code{match.col="ETL"} (also "ES" or "CVaR"), the mean-etl efficient frontier is plotted. +#' For objects created by optimize.portfolio with 'ROI' specified as the +#' optimize_method: +#' \itemize{ +#' \item The mean-var or mean-etl efficient frontier can be plotted for optimal +#' portfolio objects created by \code{optimize.portfolio}. #' +#' \item If \code{match.col="var"}, the mean-variance efficient frontier is plotted. +#' +#' \item If \code{match.col="ETL"} (also "ES" or "CVaR"), the mean-etl efficient frontier is plotted. +#' } +#' +#' Note that \code{trace=TRUE} must be specified in \code{\link{optimize.portfolio}} +#' +#' GenSA does not return any useable trace information for portfolios tested at +#' each iteration, therefore we cannot extract and chart an efficient frontier. +#' #' @param object optimal portfolio created by \code{\link{optimize.portfolio}} #' @param match.col string name of column to use for risk (horizontal axis). #' \code{match.col} must match the name of an objective in the \code{portfolio} #' object. -#' +#' @param n.portfolios number of portfolios to use to plot the efficient frontier #' @param xlim set the x-axis limit, same as in \code{\link{plot}} #' @param ylim set the y-axis limit, same as in \code{\link{plot}} +#' @param cex.axis +#' @param element.color #' @param main a main title for the plot #' @param ...
When running \code{optimize.portfolio}, -#' \code{trace=TRUE} must be specified. -#' -#' @param object optimal portfolio created by \code{\link{optimize.portfolio}} -#' @param match.col string name of column to use for risk (horizontal axis). -#' \code{match.col} must match the name of an objective in the \code{portfolio} -#' object. -#' @param xlim set the x-axis limit, same as in \code{\link{plot}} -#' @param ylim set the y-axis limit, same as in \code{\link{plot}} -#' @param main a main title for the plot -#' @param ... passthrough parameters to \code{\link{plot}} -chart.Efficient.Frontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ +#' @rdname chart.EfficientFrontier +#' @export +chart.EfficientFrontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ # This function will work with objects of class optimize.portfolio.DEoptim, # optimize.portfolio.random, and optimize.portfolio.pso if(inherits(object, "optimize.portfolio.GenSA")){ - stop("GenSA does not return any useable trace information for portfolios tested, thus we cannot extract an efficient portfolio") + stop("GenSA does not return any useable trace information for portfolios tested, thus we cannot extract an efficient frontier.") } if(!inherits(object, "optimize.portfolio")) stop("object must be of class optimize.portfolio") Deleted: pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.ROI.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.ROI.Rd 2013-08-22 22:46:06 UTC (rev 2859) +++ pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.ROI.Rd 2013-08-23 01:42:10 UTC (rev 2860) @@ -1,42 +0,0 @@ -\name{chart.Efficient.Frontier.optimize.portfolio.ROI} -\alias{chart.Efficient.Frontier.optimize.portfolio.ROI} -\title{chart the efficient frontier and risk-reward scatter plot of the assets} -\usage{ - chart.Efficient.Frontier.optimize.portfolio.ROI(object, - match.col = "ES", n.portfolios = 25, xlim = NULL, - ylim = NULL, cex.axis = 0.8, - element.color = "darkgray", - main = "Efficient Frontier", ...) -} -\arguments{ - \item{object}{optimal portfolio created by - \code{\link{optimize.portfolio}}} - - \item{match.col}{string name of column to use for risk - (horizontal axis). \code{match.col} must match the name - of an objective in the \code{portfolio} object.} - - \item{xlim}{set the x-axis limit, same as in - \code{\link{plot}}} - - \item{ylim}{set the y-axis limit, same as in - \code{\link{plot}}} - - \item{main}{a main title for the plot} - - \item{...}{passthrough parameters to \code{\link{plot}}} -} -\description{ - This function charts the efficient frontier and - risk-reward scatter plot of the assets. The mean-var or - mean-etl efficient frontier can be plotted for optimal - portfolio objects created by \code{optimize.portfolio}. -} -\details{ - If \code{match.col="var"}, the mean-variance efficient - frontier is plotted. - - If \code{match.col="ETL"} (also "ES" or "CVaR"), the - mean-etl efficient frontier is plotted. 
-} - Deleted: pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.Rd 2013-08-22 22:46:06 UTC (rev 2859) +++ pkg/PortfolioAnalytics/man/chart.Efficient.Frontier.optimize.portfolio.Rd 2013-08-23 01:42:10 UTC (rev 2860) @@ -1,39 +0,0 @@ -\name{chart.Efficient.Frontier.optimize.portfolio} -\alias{chart.Efficient.Frontier.optimize.portfolio} -\title{chart the efficient frontier and risk-reward scatter plot of the assets} -\usage{ - chart.Efficient.Frontier.optimize.portfolio(object, - match.col = "ES", n.portfolios = 25, xlim = NULL, - ylim = NULL, cex.axis = 0.8, - element.color = "darkgray", - main = "Efficient Frontier", ...) -} -\arguments{ - \item{object}{optimal portfolio created by - \code{\link{optimize.portfolio}}} - - \item{match.col}{string name of column to use for risk - (horizontal axis). \code{match.col} must match the name - of an objective in the \code{portfolio} object.} - - \item{xlim}{set the x-axis limit, same as in - \code{\link{plot}}} - - \item{ylim}{set the y-axis limit, same as in - \code{\link{plot}}} - - \item{main}{a main title for the plot} - - \item{...}{passthrough parameters to \code{\link{plot}}} -} -\description{ - This function charts the efficient frontier and - risk-reward scatter plot of the assets. The efficient - frontier plotted is based on the the trace information - (sets of portfolios tested by the solver at each - iteration) in objects created by - \code{optimize.portfolio}. When running - \code{optimize.portfolio}, \code{trace=TRUE} must be - specified. -} - Added: pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-23 01:42:10 UTC (rev 2860) @@ -0,0 +1,84 @@ +\name{chart.EfficientFrontier} +\alias{chart.EfficientFrontier} +\alias{chart.EfficientFrontier.optimize.portfolio} +\alias{chart.EfficientFrontier.optimize.portfolio.ROI} +\title{chart the efficient frontier and risk-return scatter plot of the assets} +\usage{ + chart.EfficientFrontier(object, match.col = "ES", + n.portfolios = 25, xlim = NULL, ylim = NULL, + cex.axis = 0.8, element.color = "darkgray", + main = "Efficient Frontier", ...) + + chart.EfficientFrontier.optimize.portfolio.ROI(object, + match.col = "ES", n.portfolios = 25, xlim = NULL, + ylim = NULL, cex.axis = 0.8, + element.color = "darkgray", + main = "Efficient Frontier", ...) + + chart.EfficientFrontier.optimize.portfolio(object, + match.col = "ES", n.portfolios = 25, xlim = NULL, + ylim = NULL, cex.axis = 0.8, + element.color = "darkgray", + main = "Efficient Frontier", ...) +} +\arguments{ + \item{object}{optimal portfolio created by + \code{\link{optimize.portfolio}}} + + \item{match.col}{string name of column to use for risk + (horizontal axis). 
\code{match.col} must match the name + of an objective in the \code{portfolio} object.} + + \item{n.portfolios}{number of portfolios to use to plot + the efficient frontier} + + \item{xlim}{set the x-axis limit, same as in + \code{\link{plot}}} + + \item{ylim}{set the y-axis limit, same as in + \code{\link{plot}}} + + \item{cex.axis}{} + + \item{element.color}{} + + \item{main}{a main title for the plot} + + \item{...}{passthrough parameters to \code{\link{plot}}} +} +\description{ + This function charts the efficient frontier and + risk-return scatter plot of the assets given an object + created by \code{optimize.portfolio}. +} +\details{ + For objects created by optimize.portfolio with 'DEoptim', + 'random', or 'pso' specified as the optimize_method: + \itemize{ \item The efficient frontier plotted is based + on the trace information (sets of portfolios tested + by the solver at each iteration) in objects created by + \code{optimize.portfolio}. } + + For objects created by optimize.portfolio with 'ROI' + specified as the optimize_method: \itemize{ \item The + mean-var or mean-etl efficient frontier can be plotted + for optimal portfolio objects created by + \code{optimize.portfolio}. + + \item If \code{match.col="var"}, the mean-variance + efficient frontier is plotted. + + \item If \code{match.col="ETL"} (also "ES" or "CVaR"), + the mean-etl efficient frontier is plotted. } + + Note that \code{trace=TRUE} must be specified in + \code{\link{optimize.portfolio}} + + GenSA does not return any useable trace information for + portfolios tested at each iteration, therefore we cannot + extract and chart an efficient frontier. +} +\author{ + Ross Bennett +} + From noreply at r-forge.r-project.org Fri Aug 23 04:36:20 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 23 Aug 2013 04:36:20 +0200 (CEST) Subject: [Returnanalytics-commits] r2861 - pkg/PortfolioAnalytics/R Message-ID: <20130823023620.3B4F4185992@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-23 04:36:19 +0200 (Fri, 23 Aug 2013) New Revision: 2861 Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/R/extract.efficient.frontier.R Log: Adding n.portfolios as an optional argument to extract.efficient.frontier to generate the sequence Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-23 01:42:10 UTC (rev 2860) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-23 02:36:19 UTC (rev 2861) @@ -150,7 +150,7 @@ rnames <- colnames(R) # get the data of the efficient frontier - frontier <- extract.efficient.frontier(object=object, match.col=match.col) + frontier <- extract.efficient.frontier(object=object, match.col=match.col, n.portfolios=n.portfolios) Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-23 01:42:10 UTC (rev 2860) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-23 02:36:19 UTC (rev 2861) @@ -35,14 +35,14 @@ #' @param portfolio an object of type "portfolio" specifying the constraints and objectives for the optimization, see \code{\link{portfolio.spec}} #' @param optimize_method one of "DEoptim", "random", "ROI", "pso", or "GenSA" #' @export
-extract.efficient.frontier <- function (object=NULL, match.col='ES', from=0, to=1, by=0.005, ..., R=NULL, portfolio=NULL, optimize_method='random') +extract.efficient.frontier <- function (object=NULL, match.col='ES', from=NULL, to=NULL, by=0.005, n.portfolios=NULL, ..., R=NULL, portfolio=NULL, optimize_method='random') { #TODO add a threshold argument for how close it has to be to count # do we need to recalc the constrained_objective too? I don't think so. if(!inherits(object, "optimize.portfolio")) stop("object passed in must be of class 'optimize.portfolio'") - set<-seq(from=from,to=to,by=by) - set<-cbind(quantmod::Lag(set,1),as.matrix(set))[-1,] + #set<-seq(from=from,to=to,by=by) + #set<-cbind(quantmod::Lag(set,1),as.matrix(set))[-1,] if(is.null(object)){ if(!is.null(R) & !is.null(portfolio)){ portfolios<-optimize.portfolio(portfolio=portfolio, R=R, optimize_method=optimize_method[1], trace=TRUE, ...) @@ -67,7 +67,21 @@ if(is.na(mtc)) { mtc = pmatch(paste(match.col,match.col,sep='.'),columnnames) } + if(is.null(from)){ + from <- min(xtract[, mtc]) + } + if(is.null(to)){ + to <- max(xtract[, mtc]) + } + if(!is.null(n.portfolios)){ + # create the sequence using length.out if the user has specified a value for the n.portfolios arg + set<-seq(from=from, to=to, length.out=n.portfolios) + } else { + # fall back to using by to create the sequence + set<-seq(from=from, to=to, by=by) + } + set<-cbind(quantmod::Lag(set,1),as.matrix(set))[-1,] result <- foreach(i=1:nrow(set),.inorder=TRUE, .combine=rbind, .errorhandling='remove') %do% { tmp<-xtract[which(xtract[,mtc]>=set[i,1] & xtract[,mtc]<set[i,2]),] Author: rossbennett34 Date: 2013-08-23 07:13:24 +0200 (Fri, 23 Aug 2013) New Revision: 2862 Added: pkg/PortfolioAnalytics/man/create.EfficientFrontier.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/extract.efficient.frontier.R pkg/PortfolioAnalytics/man/extract.efficient.frontier.Rd Log: Adding wrapper function to create efficient frontiers. Modifying *.efficient.frontier functions to return an object of class 'efficient.frontier'.
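A minimal sketch of the new wrapper follows; the mean-var specification below is an assumed example (a QP solver available through ROI is required for type="mean-var", and returns is any xts of asset returns such as a subset of edhec).

meanvar.portf <- portfolio.spec(assets=colnames(returns))
meanvar.portf <- add.constraint(portfolio=meanvar.portf, type="full_investment")
meanvar.portf <- add.constraint(portfolio=meanvar.portf, type="long_only")
meanvar.portf <- add.objective(portfolio=meanvar.portf, type="return", name="mean")
meanvar.portf <- add.objective(portfolio=meanvar.portf, type="risk", name="var")
# dispatches to meanvar.efficient.frontier; the result now carries the
# 'efficient.frontier' class so downstream functions can check for it
ef <- create.EfficientFrontier(R=returns, portfolio=meanvar.portf,
                               type="mean-var", n.portfolios=25)
class(ef)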
Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-23 02:36:19 UTC (rev 2861) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-23 05:13:24 UTC (rev 2862) @@ -39,6 +39,7 @@ export(constraint_ROI) export(constraint_v2) export(constraint) +export(create.EfficientFrontier) export(diversification_constraint) export(diversification) export(extract.efficient.frontier) Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-23 02:36:19 UTC (rev 2861) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-23 05:13:24 UTC (rev 2862) @@ -90,7 +90,7 @@ } # combine the stats from the optimal portfolio to result matrix result <- rbind(opt, result) - return(result) + return(structure(result, class="efficient.frontier")) } #' Generate the efficient frontier for a mean-variance portfolio @@ -167,7 +167,7 @@ out[i, ] <- extractStats(optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI")) } colnames(out) <- names(stats) - return(out) + return(structure(out, class="efficient.frontier")) } #' Generate the efficient frontier for a mean-etl portfolio @@ -246,6 +246,85 @@ out[i, ] <- extractStats(optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI")) } colnames(out) <- names(stats) - return(out) + return(structure(out, class="efficient.frontier")) } +#' create an efficient frontier +#' +#' @details currently there are 4 'types' supported to create an efficient frontier +#' \itemize{ +#' \item{"mean-var":}{ This is a special case for an efficient frontier that +#' can be created by a QP solver. The \code{portfolio} object should have two +#' objectives: 1) mean and 2) var. The efficient frontier will be created via +#' \code{\link{meanvar.efficient.frontier}}.} +#' \item{"mean-etl"}{ This is a special case for an efficient frontier that +#' can be created by an LP solver. The \code{portfolio} object should have two +#' objectives: 1) mean and 2) etl. The efficient frontier will be created via +#' \code{\link{meanetl.efficient.frontier}}.} +#' \item{"DEoptim"}{ This can handle more complex constraints and objectives +#' than the simple mean-var and mean-etl cases. For this type, we actually +#' call \code{\link{optimize.portfolio}} with \code{optimize_method="DEoptim"} +#' and then extract the efficient frontier with +#' \code{\link{extract.efficient.frontier}}.} +#' \item{"random"}{ This can handle more complex constraints and objectives +#' than the simple mean-var and mean-etl cases. For this type, we actually +#' call \code{\link{optimize.portfolio}} with \code{optimize_method="random"} +#' and then extract the efficient frontier with +#' \code{\link{extract.efficient.frontier}}.} +#' } +#' +#' @param R xts of asset returns +#' @param portfolio object of class 'portfolio' specifying the constraints and objectives, see \code{\link{portfolio.spec}} +#' @param type type of efficient frontier, see details +#' @param n.portfolios number of portfolios to calculate along the efficient frontier +#' @param match.col column to match when extracting the efficient frontier from an object created by optimize.portfolio +#' @param search_size passed to optimize.portfolio for type="DEoptim" or type="random" +#' @param ...
passthrough parameters to \code{\link{optimize.portfolio}} +#' @return an object of class 'efficient.frontier' with the objective measures +#' and weights of portfolios along the efficient frontier +#' @author Ross Bennett +#' @seealso \code{\link{optimize.portfolio}}, +#' \code{\link{portfolio.spec}}, +#' \code{\link{meanvar.efficient.frontier}}, +#' \code{\link{meanetl.efficient.frontier}}, +#' \code{\link{extract.efficient.frontier}} +#' @export +create.EfficientFrontier <- function(R, portfolio, type=c("mean-var", "mean-etl", "random", "DEoptim"), n.portfolios=25, match.col="ES", search_size=2000, ...){ + # This is just a wrapper around a few functions to easily create efficient frontiers + # given a portfolio object and other parameters + + if(!is.portfolio(portfolio)) stop("portfolio must be of class 'portfolio'") + type <- type[1] + switch(type, + "mean-var" = {frontier <- meanvar.efficient.frontier(portfolio=portfolio, + R=R, + n.portfolios=n.portfolios) + }, + "mean-etl" = {frontier <- meanetl.efficient.frontier(portfolio=portfolio, + R=R, + n.portfolios=n.portfolios) + }, + "random" = {tmp <- optimize.portfolio(R=R, + portfolio=portfolio, + optimize_method=type, + trace=TRUE, + search_size=search_size, + ...=...) + frontier <- extract.efficient.frontier(object=tmp, + match.col=match.col, + n.portfolios=n.portfolios) + }, + "DEoptim" = {tmp <- optimize.portfolio(R=R, + portfolio=portfolio, + optimize_method=type, + trace=TRUE, + search_size=search_size, + ...=...) + frontier <- extract.efficient.frontier(object=tmp, + match.col=match.col, + n.portfolios=n.portfolios) + } + ) + return(frontier) +} + Added: pkg/PortfolioAnalytics/man/create.EfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/create.EfficientFrontier.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/create.EfficientFrontier.Rd 2013-08-23 05:13:24 UTC (rev 2862) @@ -0,0 +1,78 @@ +\name{create.EfficientFrontier} +\alias{create.EfficientFrontier} +\title{create an efficient frontier} +\usage{ + create.EfficientFrontier(R, portfolio, + type = c("mean-var", "mean-etl", "random", "DEoptim"), + n.portfolios = 25, match.col = "ES", + search_size = 2000, ...) +} +\arguments{ + \item{R}{xts of asset returns} + + \item{portfolio}{object of class 'portfolio' specifying + the constraints and objectives, see + \code{\link{portfolio.spec}}} + + \item{type}{type of efficient frontier, see details} + + \item{n.portfolios}{number of portfolios to calculate + along the efficient frontier} + + \item{match.col}{column to match when extracting the + efficient frontier from an object created by + optimize.portfolio} + + \item{search_size}{passed to optimize.portfolio for + type="DEoptim" or type="random"} + + \item{...}{passthrough parameters to + \code{\link{optimize.portfolio}}} +} +\value{ + an object of class 'efficient.frontier' with the + objective measures and weights of portfolios along the + efficient frontier +} +\description{ + create an efficient frontier +} +\details{ + currently there are 4 'types' supported to create an + efficient frontier \itemize{ \item{"mean-var":}{ This is + a special case for an efficient frontier that can be + created by a QP solver. The \code{portfolio} object + should have two objectives: 1) mean and 2) var. The + efficient frontier will be created via + \code{\link{meanvar.efficient.frontier}}.} + \item{"mean-etl"}{ This is a special case for an + efficient frontier that can be created by an LP solver.
+ The \code{portfolio} object should have two objectives: + 1) mean and 2) etl. The efficient frontier will be created + via \code{\link{meanetl.efficient.frontier}}.} + \item{"DEoptim"}{ This can handle more complex + constraints and objectives than the simple mean-var and + mean-etl cases. For this type, we actually call + \code{\link{optimize.portfolio}} with + \code{optimize_method="DEoptim"} and then extract the + efficient frontier with + \code{\link{extract.efficient.frontier}}.} + \item{"random"}{ This can handle more complex constraints + and objectives than the simple mean-var and mean-etl + cases. For this type, we actually call + \code{\link{optimize.portfolio}} with + \code{optimize_method="random"} and then extract the + efficient frontier with + \code{\link{extract.efficient.frontier}}.} } +} +\author{ + Ross Bennett +} +\seealso{ + \code{\link{optimize.portfolio}}, + \code{\link{portfolio.spec}}, + \code{\link{meanvar.efficient.frontier}}, + \code{\link{meanetl.efficient.frontier}}, + \code{\link{extract.efficient.frontier}} +} + Modified: pkg/PortfolioAnalytics/man/extract.efficient.frontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/extract.efficient.frontier.Rd 2013-08-23 02:36:19 UTC (rev 2861) +++ pkg/PortfolioAnalytics/man/extract.efficient.frontier.Rd 2013-08-23 05:13:24 UTC (rev 2862) @@ -3,8 +3,9 @@ \title{Extract the efficient frontier of portfolios that meet your objectives over a range of risks} \usage{ extract.efficient.frontier(object = NULL, - match.col = "ES", from = 0, to = 1, by = 0.005, ..., - R = NULL, portfolio = NULL, optimize_method = "random") + match.col = "ES", from = NULL, to = NULL, by = 0.005, + n.portfolios = NULL, ..., R = NULL, portfolio = NULL, + optimize_method = "random") } \arguments{ \item{object}{optimal portfolio object as created by From noreply at r-forge.r-project.org Fri Aug 23 20:13:55 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 23 Aug 2013 20:13:55 +0200 (CEST) Subject: [Returnanalytics-commits] r2863 - pkg/PerformanceAnalytics/sandbox/pulkit/R Message-ID: <20130823181355.4779318092E@r-forge.r-project.org> Author: pulkit Date: 2013-08-23 20:13:55 +0200 (Fri, 23 Aug 2013) New Revision: 2863 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R Log: changes in gpd Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-23 05:13:24 UTC (rev 2862) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-23 18:13:55 UTC (rev 2863) @@ -21,32 +21,40 @@ #' #'\deqn{G(m) = 1- e^{-\frac{m^\gamma}{\psi}}} #' -#'The unit exponential distribution is given by the following equation \eqn{MGPD(1,0,\psi)} +#'In this function the weibull and generalized Pareto distributions are covered. This function can be +#'expanded in the future to include more Extreme Value distributions as the literature on such +#'distributions matures. #' -#'\deqn{G(m) = 1- e^{-m}} -#' -#' #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset return #' @param type The type of distribution "gpd","pd","weibull" #' @param threshold The threshold beyond which the drawdowns have to be modelled +#' +#' +#'@examples +#' +#'DrawdownGPD(edhec[,1],"gpd",0.95) +#' +#'DrawdownGPD(edhec[,1],"weibull") #' +#'@references +#'Mendes, Beatriz V.M.
and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003). Coppead Working Paper Series No. 359.
-#'Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322.
+#'Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003).
+#'Coppead Working Paper Series No. 359. Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322.
 #'
 
-DrawdownGPD<-function(R,type=c("gpd","pd","weibull"),threshold=0.90){
+DrawdownGPD<-function(R,type=c("gpd","weibull"),threshold=0.90){
   x = checkData(R)
   columns = ncol(R)
   columnnames = colnames(R)
   type = type[1]
   dr = -Drawdowns(R)
-  data = sort(as.vector(dr))
-  threshold = data[threshold*nrow(R)]
-  if(type=="gpd"){
-    gpd_fit = gpd(data,threshold)
-    return(gpd_fit)
-  }
-  if(type=="wiebull"){
+  
+  
+  gpdfit<-function(data,threshold){
+    if(type=="gpd"){
+      gpd_fit = gpd(data,threshold)
+      result = list(shape = gpd_fit$param[2],scale = gpd_fit$param[1])
+      return(result)
+    }
+    if(type=="weibull"){
       # From package MASS
       if(any( data<= 0)) stop("Weibull values must be > 0")
       lx <- log(data)
@@ -54,7 +62,28 @@
       shape <- 1.2/sqrt(v); scale <- exp(m + 0.572/shape)
       result <- list(shape = shape, scale = scale)
       return(result)
-  }
+    }
+  }
+  for(column in 1:columns){
+    data = sort(as.vector(dr[,column]))
+    threshold = data[threshold*nrow(R)]
+    column.parameters <- gpdfit(data,threshold)
+    if(column == 1){
+      shape = column.parameters$shape
+      scale = column.parameters$scale
+    }
+    else {
+      scale = merge(scale, column.parameters$scale)
+      shape = merge(shape, column.parameters$shape)
+      print(scale)
+      print(shape)
+    }
+  }
+  parameters = rbind(scale,shape)
+  colnames(parameters) = columnnames
+  parameters = reclass(parameters, x)
+  rownames(parameters)=c("scale","shape")
+  return(parameters)
 }


From noreply at r-forge.r-project.org  Fri Aug 23 20:32:05 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Fri, 23 Aug 2013 20:32:05 +0200 (CEST)
Subject: [Returnanalytics-commits] r2864 - in pkg/PortfolioAnalytics: . R man
Message-ID: <20130823183205.1AC9C18092E@r-forge.r-project.org>

Author: rossbennett34
Date: 2013-08-23 20:32:04 +0200 (Fri, 23 Aug 2013)
New Revision: 2864

Added:
   pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd
Modified:
   pkg/PortfolioAnalytics/NAMESPACE
   pkg/PortfolioAnalytics/R/charts.efficient.frontier.R
   pkg/PortfolioAnalytics/R/extract.efficient.frontier.R
Log:
Adding function to chart the weights along the efficient frontier.
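A minimal usage sketch for the new chart.Weights.EF, assuming the edhec data set that ships with PerformanceAnalytics; it mirrors the calls exercised in the r2870 testing script later in this digest:

library(PortfolioAnalytics)
data(edhec)
R <- edhec[, 1:5]
funds <- colnames(R)
# mean-ES portfolio: full investment, long only
meanetl.portf <- portfolio.spec(assets=funds)
meanetl.portf <- add.constraint(portfolio=meanetl.portf, type="full_investment")
meanetl.portf <- add.constraint(portfolio=meanetl.portf, type="long_only")
meanetl.portf <- add.objective(portfolio=meanetl.portf, type="return", name="mean")
meanetl.portf <- add.objective(portfolio=meanetl.portf, type="risk", name="ES")
# trace the frontier, then stack the weights along it, ordered by the risk column
meanetl.ef <- create.EfficientFrontier(R=R, portfolio=meanetl.portf, type="mean-etl")
chart.Weights.EF(meanetl.ef, colorset=bluemono, match.col="ES")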
Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-23 18:13:55 UTC (rev 2863) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-23 18:32:04 UTC (rev 2864) @@ -18,6 +18,7 @@ export(chart.Scatter.ROI) export(chart.Scatter.RP) export(chart.Weights.DE) +export(chart.Weights.EF) export(chart.Weights.GenSA) export(chart.Weights.optimize.portfolio.DEoptim) export(chart.Weights.optimize.portfolio.GenSA) Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-23 18:13:55 UTC (rev 2863) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-23 18:32:04 UTC (rev 2864) @@ -176,3 +176,62 @@ axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) } + +#' chart the weights along the efficient frontier +#' +#' This creates a stacked column chart of the weights of portfolios along the efficient frontier. +#' +#' @param object object of class 'efficient.frontier' created by \code{\link{create.EfficientFrontier}}. +#' @param colorset color palette to use. +#' @param ... passthrough parameters to \code{chart.StackedBar}. +#' @param match.col match.col string name of column to use for risk (horizontal axis). +#' Must match the name of an objective. +#' @param main main title used in the plot. +#' @param las sets the orientation of the axis labels, as described in \code{\link{par}}. +#' @param cex.lab The magnification to be used for x- and y-axis labels relative to the current setting of 'cex'. +#' @param cex.axis The magnification to be used for sizing the axis text relative to the current setting of 'cex', similar to \code{\link{plot}}. +#' @param cex.legend The magnification to be used for sizing the legend relative to the current setting of 'cex', similar to \code{\link{plot}}. +#' @param legend.loc places a legend into a location on the chart similar to \code{\link{chart.TimeSeries}}. The default, "under," is the only location currently implemented for this chart. Use 'NULL' to remove the legend. +#' @param legend.labels character vector to use for the legend labels +#' @param element.color provides the color for drawing less-important chart elements, such as the box lines, axis lines, etc. +#' @author Ross Bennett +#' @export +chart.Weights.EF <- function(object, colorset=NULL, ..., match.col="ES", main="EF Weights", las=1, cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.loc="under", legend.labels=NULL, element.color="darkgray"){ + if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") + + # get the columns with weights + cnames <- colnames(object) + wts_idx <- grep(pattern="^w\\.", cnames) + wts <- object[, wts_idx] + + if(!is.null(legend.labels)){ + # use legend.labels passed in by user + colnames(wts) <- legend.labels + } else { + # remove w. 
from the column names + colnames(wts) <- gsub(pattern="^w\\.", replacement="", cnames[wts_idx]) + } + + # get the "mean" column + mean.mtc <- pmatch("mean", cnames) + if(is.na(mean.mtc)) { + mean.mtc <- pmatch("mean.mean", cnames) + } + if(is.na(mean.mtc)) stop("could not match 'mean' with column name of extractStats output") + + # get the match.col column + mtc <- pmatch(match.col, cnames) + if(is.na(mtc)) { + mtc <- pmatch(paste(match.col,match.col,sep='.'),cnames) + } + if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") + + # plot the 'match.col' (risk) as the x-axis labels + xlabels <- round(object[, mtc], 4) + + chart.StackedBar(w=wts, colorset=colorset, ..., main=main, las=las, space=0, + cex.lab=cex.lab, cex.axis=cex.axis, cex.legend=cex.legend, + legend.loc=legend.loc, element.color=element.color, + xaxis.labels=xlabels, xlab=match.col, ylab="Weights") +} + Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-23 18:13:55 UTC (rev 2863) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-23 18:32:04 UTC (rev 2864) @@ -67,6 +67,8 @@ if(is.na(mtc)) { mtc = pmatch(paste(match.col,match.col,sep='.'),columnnames) } + if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") + if(is.null(from)){ from <- min(xtract[, mtc]) } Added: pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd 2013-08-23 18:32:04 UTC (rev 2864) @@ -0,0 +1,60 @@ +\name{chart.Weights.EF} +\alias{chart.Weights.EF} +\title{chart the weights along the efficient frontier} +\usage{ + chart.Weights.EF(object, colorset = NULL, ..., + match.col = "ES", main = "EF Weights", las = 1, + cex.lab = 0.8, cex.axis = 0.8, cex.legend = 0.8, + legend.loc = "under", legend.labels = NULL, + element.color = "darkgray") +} +\arguments{ + \item{object}{object of class 'efficient.frontier' + created by \code{\link{create.EfficientFrontier}}.} + + \item{colorset}{color palette to use.} + + \item{...}{passthrough parameters to + \code{chart.StackedBar}.} + + \item{match.col}{match.col string name of column to use + for risk (horizontal axis). Must match the name of an + objective.} + + \item{main}{main title used in the plot.} + + \item{las}{sets the orientation of the axis labels, as + described in \code{\link{par}}.} + + \item{cex.lab}{The magnification to be used for x- and + y-axis labels relative to the current setting of 'cex'.} + + \item{cex.axis}{The magnification to be used for sizing + the axis text relative to the current setting of 'cex', + similar to \code{\link{plot}}.} + + \item{cex.legend}{The magnification to be used for sizing + the legend relative to the current setting of 'cex', + similar to \code{\link{plot}}.} + + \item{legend.loc}{places a legend into a location on the + chart similar to \code{\link{chart.TimeSeries}}. The + default, "under," is the only location currently + implemented for this chart. 
Use 'NULL' to remove the + legend.} + + \item{legend.labels}{character vector to use for the + legend labels} + + \item{element.color}{provides the color for drawing + less-important chart elements, such as the box lines, + axis lines, etc.} +} +\description{ + This creates a stacked column chart of the weights of + portfolios along the efficient frontier. +} +\author{ + Ross Bennett +} + From noreply at r-forge.r-project.org Fri Aug 23 21:33:57 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 23 Aug 2013 21:33:57 +0200 (CEST) Subject: [Returnanalytics-commits] r2865 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130823193357.177D5185B7D@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-23 21:33:56 +0200 (Fri, 23 Aug 2013) New Revision: 2865 Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/R/extract.efficient.frontier.R pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd Log: Modifying the create.EfficientFrontier function to return the asset returns and set the class. Adding function to chart an efficient frontier of an objected created by create.EfficientFrontier. Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-23 18:32:04 UTC (rev 2864) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-23 19:33:56 UTC (rev 2865) @@ -3,6 +3,7 @@ export(applyFUN) export(box_constraint) export(CCCgarch.MM) +export(chart.EfficientFrontier.efficient.frontier) export(chart.EfficientFrontier.optimize.portfolio.ROI) export(chart.EfficientFrontier.optimize.portfolio) export(chart.EfficientFrontier) Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-23 18:32:04 UTC (rev 2864) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-23 19:33:56 UTC (rev 2865) @@ -198,11 +198,11 @@ #' @export chart.Weights.EF <- function(object, colorset=NULL, ..., match.col="ES", main="EF Weights", las=1, cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.loc="under", legend.labels=NULL, element.color="darkgray"){ if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") - + frontier <- object$frontier # get the columns with weights - cnames <- colnames(object) + cnames <- colnames(frontier) wts_idx <- grep(pattern="^w\\.", cnames) - wts <- object[, wts_idx] + wts <- frontier[, wts_idx] if(!is.null(legend.labels)){ # use legend.labels passed in by user @@ -227,7 +227,7 @@ if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") # plot the 'match.col' (risk) as the x-axis labels - xlabels <- round(object[, mtc], 4) + xlabels <- round(frontier[, mtc], 4) chart.StackedBar(w=wts, colorset=colorset, ..., main=main, las=las, space=0, cex.lab=cex.lab, cex.axis=cex.axis, cex.legend=cex.legend, @@ -235,3 +235,56 @@ xaxis.labels=xlabels, xlab=match.col, ylab="Weights") } +#' @rdname chart.EfficientFrontier +#' @export +chart.EfficientFrontier.efficient.frontier <- function(object, chart.assets=TRUE, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ + if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") + + # get the returns and efficient frontier object + R <- object$R + 
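+  # both elements are attached by create.EfficientFrontier, which returns
+  # structure(list(frontier=..., R=...), class="efficient.frontier");
+  # see the extract.efficient.frontier.R hunk below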
frontier <- object$frontier + + # get the column names from the frontier object + cnames <- colnames(frontier) + + # get the "mean" column + mean.mtc <- pmatch("mean", cnames) + if(is.na(mean.mtc)) { + mean.mtc <- pmatch("mean.mean", cnames) + } + if(is.na(mean.mtc)) stop("could not match 'mean' with column name of extractStats output") + + # get the match.col column + mtc <- pmatch(match.col, cnames) + if(is.na(mtc)) { + mtc <- pmatch(paste(match.col,match.col,sep='.'),cnames) + } + if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") + + if(chart.assets){ + # get the data to plot scatter of asset returns + asset_ret <- scatterFUN(R=R, FUN="mean") + asset_risk <- scatterFUN(R=R, FUN=match.col) + rnames <- colnames(R) + + # set the x and y limits + if(is.null(xlim)){ + xlim <- range(c(frontier[, mtc], asset_risk)) + } + if(is.null(ylim)){ + ylim <- range(c(frontier[, mean.mtc], asset_ret)) + } + } + + # plot the efficient frontier line + plot(x=frontier[, mtc], y=frontier[, mean.mtc], ylab="mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, pch=5, axes=FALSE, ...) + if(chart.assets){ + # risk-return scatter of the assets + points(x=asset_risk, y=asset_ret) + text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + } + axis(1, cex.axis = cex.axis, col = element.color) + axis(2, cex.axis = cex.axis, col = element.color) + box(col = element.color) +} + Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-23 18:32:04 UTC (rev 2864) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-23 19:33:56 UTC (rev 2865) @@ -327,6 +327,7 @@ n.portfolios=n.portfolios) } ) - return(frontier) + return(structure(list(frontier=frontier, + R=R), class="efficient.frontier")) } Modified: pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-23 18:32:04 UTC (rev 2864) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-23 19:33:56 UTC (rev 2865) @@ -1,5 +1,6 @@ \name{chart.EfficientFrontier} \alias{chart.EfficientFrontier} +\alias{chart.EfficientFrontier.efficient.frontier} \alias{chart.EfficientFrontier.optimize.portfolio} \alias{chart.EfficientFrontier.optimize.portfolio.ROI} \title{chart the efficient frontier and risk-return scatter plot of the assets} @@ -20,6 +21,12 @@ ylim = NULL, cex.axis = 0.8, element.color = "darkgray", main = "Efficient Frontier", ...) + + chart.EfficientFrontier.efficient.frontier(object, + chart.assets = TRUE, match.col = "ES", + n.portfolios = NULL, xlim = NULL, ylim = NULL, + cex.axis = 0.8, element.color = "darkgray", + main = "Efficient Frontier", ...) 
} \arguments{ \item{object}{optimal portfolio created by From noreply at r-forge.r-project.org Fri Aug 23 21:42:12 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 23 Aug 2013 21:42:12 +0200 (CEST) Subject: [Returnanalytics-commits] r2866 - pkg/PerformanceAnalytics/sandbox/Shubhankit Message-ID: <20130823194212.2EF7A18502B@r-forge.r-project.org> Author: peter_carl Date: 2013-08-23 21:42:11 +0200 (Fri, 23 Aug 2013) New Revision: 2866 Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION Log: - added some missing functions Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION 2013-08-23 19:33:56 UTC (rev 2865) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION 2013-08-23 19:42:11 UTC (rev 2866) @@ -30,3 +30,8 @@ 'table.normDD.R' 'table.UnsmoothReturn.R' 'UnsmoothReturn.R' + 'AcarSim.R' + 'CDD.Opt.R' + 'CalmarRatio.Norm.R' + 'SterlingRatio.Norm.R' + From noreply at r-forge.r-project.org Fri Aug 23 21:42:36 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 23 Aug 2013 21:42:36 +0200 (CEST) Subject: [Returnanalytics-commits] r2867 - pkg/PerformanceAnalytics/sandbox/pulkit Message-ID: <20130823194236.0D24718502B@r-forge.r-project.org> Author: peter_carl Date: 2013-08-23 21:42:35 +0200 (Fri, 23 Aug 2013) New Revision: 2867 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION Log: - added some missing functions Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-23 19:42:11 UTC (rev 2866) +++ pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-23 19:42:35 UTC (rev 2867) @@ -31,6 +31,7 @@ 'EDDCOPS.R' 'Edd.R' 'ExtremeDrawdown.R' + 'gpdmle.R' 'GoldenSection.R' 'MaxDD.R' 'MinTRL.R' From noreply at r-forge.r-project.org Sat Aug 24 01:37:10 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 24 Aug 2013 01:37:10 +0200 (CEST) Subject: [Returnanalytics-commits] r2868 - in pkg/FactorAnalytics: R man Message-ID: <20130823233710.F0E461859D7@r-forge.r-project.org> Author: chenyian Date: 2013-08-24 01:37:10 +0200 (Sat, 24 Aug 2013) New Revision: 2868 Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R pkg/FactorAnalytics/R/factorModelSdDecomposition.R pkg/FactorAnalytics/R/factorModelVaRDecomposition.R pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd pkg/FactorAnalytics/man/factorModelSdDecomposition.Rd pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd Log: clean up variable names. Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-23 19:42:35 UTC (rev 2867) +++ pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-23 23:37:10 UTC (rev 2868) @@ -32,9 +32,9 @@ #' \item{n.exceed} Scalar, number of observations beyond VaR. #' \item{idx.exceed} \code{n.exceed x 1} vector giving index values of exceedences. #' \item{ES scalar} nonparametric ES value for fund reported as a positive number. -#' \item{mcES} \code{(K+1) x 1} vector of factor marginal contributions to ES. +#' \item{mES} \code{(K+1) x 1} vector of factor marginal contributions to ES. #' \item{cES} \code{(K+1) x 1} vector of factor component contributions to ES. 
-#' \item{pcES} \code{(K+1) x 1} vector of factor percent contributions to ES. +#' \item{pcES} \code{(K+1) x 1} vector of factor percentage component contributions to ES. #' } #' @author Eric Zviot and Yi-An Chen. #' @references 1. Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A @@ -123,7 +123,7 @@ n.exceed = length(idx), idx.exceed = idx, ES = ES.fm, - mcES = t(mcES.fm), + mES = t(mcES.fm), cES = t(cES.fm), pcES = t(pcES.fm)) return(ans) Modified: pkg/FactorAnalytics/R/factorModelSdDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelSdDecomposition.R 2013-08-23 19:42:35 UTC (rev 2867) +++ pkg/FactorAnalytics/R/factorModelSdDecomposition.R 2013-08-23 23:37:10 UTC (rev 2868) @@ -9,10 +9,10 @@ #' @param sig2.e scalar, residual variance from factor model. #' @return an S3 object containing #' \itemize{ -#' \item{sd.fm} Scalar, std dev based on factor model. -#' \item{mcr.fm} (K+1) x 1 vector of factor marginal contributions to risk sd. -#' \item{cr.fm} (K+1) x 1 vector of factor component contributions to risk sd. -#' \item{pcr.fm} (K+1) x 1 vector of factor percent contributions to risk sd. +#' \item{Sd.fm} Scalar, std dev based on factor model. +#' \item{mSd.fm} (K+1) x 1 vector of factor marginal contributions to risk sd. +#' \item{cSd.fm} (K+1) x 1 vector of factor component contributions to risk sd. +#' \item{pcSd.fm} (K+1) x 1 vector of factor percentage component contributions to risk sd. #' } #' @author Eric Zivot and Yi-An Chen #' @examples @@ -67,10 +67,10 @@ colnames(cr.fm) = "CR" colnames(pcr.fm) = "PCR" ## return results - ans = list(sd.fm = sd.fm, - mcr.fm = t(mcr.fm), - cr.fm = t(cr.fm), - pcr.fm = t(pcr.fm)) + ans = list(Sd.fm = sd.fm, + mSd.fm = t(mcr.fm), + cSd.fm = t(cr.fm), + pcSd.fm = t(pcr.fm)) return(ans) } Modified: pkg/FactorAnalytics/R/factorModelVaRDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelVaRDecomposition.R 2013-08-23 19:42:35 UTC (rev 2867) +++ pkg/FactorAnalytics/R/factorModelVaRDecomposition.R 2013-08-23 23:37:10 UTC (rev 2868) @@ -29,7 +29,7 @@ #' exceedences. #' \item{mVaR.fm} (K+1) x 1 vector of factor marginal contributions to VaR. #' \item{cVaR.fm} (K+1) x 1 vector of factor component contributions to VaR. -#' \item{pcVaR.fm} (K+1) x 1 vector of factor percent contributions to VaR. +#' \item{pcVaR.fm} (K+1) x 1 vector of factor percentage contributions to VaR. #' } #' @author Eric Zivot and Yi-An Chen #' @references 1. Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A Modified: pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-23 19:42:35 UTC (rev 2867) +++ pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-23 23:37:10 UTC (rev 2868) @@ -35,11 +35,12 @@ number of observations beyond VaR. \item{idx.exceed} \code{n.exceed x 1} vector giving index values of exceedences. \item{ES scalar} nonparametric ES value for - fund reported as a positive number. \item{mcES} + fund reported as a positive number. \item{mES} \code{(K+1) x 1} vector of factor marginal contributions to ES. \item{cES} \code{(K+1) x 1} vector of factor component contributions to ES. \item{pcES} \code{(K+1) x - 1} vector of factor percent contributions to ES. } + 1} vector of factor percentage component contributions to + ES. 
} } \description{ Compute the factor model factor expected shortfall (ES) Modified: pkg/FactorAnalytics/man/factorModelSdDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelSdDecomposition.Rd 2013-08-23 19:42:35 UTC (rev 2867) +++ pkg/FactorAnalytics/man/factorModelSdDecomposition.Rd 2013-08-23 23:37:10 UTC (rev 2868) @@ -15,12 +15,13 @@ model.} } \value{ - an S3 object containing \itemize{ \item{sd.fm} Scalar, - std dev based on factor model. \item{mcr.fm} (K+1) x 1 + an S3 object containing \itemize{ \item{Sd.fm} Scalar, + std dev based on factor model. \item{mSd.fm} (K+1) x 1 vector of factor marginal contributions to risk sd. - \item{cr.fm} (K+1) x 1 vector of factor component - contributions to risk sd. \item{pcr.fm} (K+1) x 1 vector - of factor percent contributions to risk sd. } + \item{cSd.fm} (K+1) x 1 vector of factor component + contributions to risk sd. \item{pcSd.fm} (K+1) x 1 vector + of factor percentage component contributions to risk sd. + } } \description{ Compute factor model factor risk (sd) decomposition for Modified: pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd 2013-08-23 19:42:35 UTC (rev 2867) +++ pkg/FactorAnalytics/man/factorModelVaRDecomposition.Rd 2013-08-23 23:37:10 UTC (rev 2868) @@ -34,7 +34,7 @@ vector of factor marginal contributions to VaR. \item{cVaR.fm} (K+1) x 1 vector of factor component contributions to VaR. \item{pcVaR.fm} (K+1) x 1 vector of - factor percent contributions to VaR. } + factor percentage contributions to VaR. } } \description{ Compute factor model factor VaR decomposition based on From noreply at r-forge.r-project.org Sat Aug 24 01:48:30 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 24 Aug 2013 01:48:30 +0200 (CEST) Subject: [Returnanalytics-commits] r2869 - in pkg/FactorAnalytics: R man Message-ID: <20130823234830.267E1185932@r-forge.r-project.org> Author: chenyian Date: 2013-08-24 01:48:29 +0200 (Sat, 24 Aug 2013) New Revision: 2869 Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd Log: clean up variables. Modified: pkg/FactorAnalytics/R/factorModelEsDecomposition.R =================================================================== --- pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-23 23:37:10 UTC (rev 2868) +++ pkg/FactorAnalytics/R/factorModelEsDecomposition.R 2013-08-23 23:48:29 UTC (rev 2869) @@ -31,10 +31,10 @@ #' positive number.} #' \item{n.exceed} Scalar, number of observations beyond VaR. #' \item{idx.exceed} \code{n.exceed x 1} vector giving index values of exceedences. -#' \item{ES scalar} nonparametric ES value for fund reported as a positive number. -#' \item{mES} \code{(K+1) x 1} vector of factor marginal contributions to ES. -#' \item{cES} \code{(K+1) x 1} vector of factor component contributions to ES. -#' \item{pcES} \code{(K+1) x 1} vector of factor percentage component contributions to ES. +#' \item{ES.fm} Scalar. nonparametric ES value for fund reported as a positive number. +#' \item{mES.fm} \code{(K+1) x 1} vector of factor marginal contributions to ES. +#' \item{cES.fm} \code{(K+1) x 1} vector of factor component contributions to ES. +#' \item{pcES.fm} \code{(K+1) x 1} vector of factor percentage component contributions to ES. #' } #' @author Eric Zviot and Yi-An Chen. #' @references 1. 
Hallerback (2003), "Decomposing Portfolio Value-at-Risk: A @@ -119,13 +119,13 @@ colnames(mcES.fm) = "MCES" colnames(cES.fm) = "CES" colnames(pcES.fm) = "PCES" -ans = list(VaR = -VaR.fm, +ans = list(VaR.fm = -VaR.fm, n.exceed = length(idx), idx.exceed = idx, - ES = ES.fm, - mES = t(mcES.fm), - cES = t(cES.fm), - pcES = t(pcES.fm)) + ES.fm = ES.fm, + mES.fm = t(mcES.fm), + cES.fm = t(cES.fm), + pcES.fm = t(pcES.fm)) return(ans) } Modified: pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-23 23:37:10 UTC (rev 2868) +++ pkg/FactorAnalytics/man/factorModelEsDecomposition.Rd 2013-08-23 23:48:29 UTC (rev 2869) @@ -34,13 +34,13 @@ reported as a positive number.} \item{n.exceed} Scalar, number of observations beyond VaR. \item{idx.exceed} \code{n.exceed x 1} vector giving index values of - exceedences. \item{ES scalar} nonparametric ES value for - fund reported as a positive number. \item{mES} + exceedences. \item{ES.fm} Scalar. nonparametric ES value + for fund reported as a positive number. \item{mES.fm} \code{(K+1) x 1} vector of factor marginal contributions - to ES. \item{cES} \code{(K+1) x 1} vector of factor - component contributions to ES. \item{pcES} \code{(K+1) x - 1} vector of factor percentage component contributions to - ES. } + to ES. \item{cES.fm} \code{(K+1) x 1} vector of factor + component contributions to ES. \item{pcES.fm} \code{(K+1) + x 1} vector of factor percentage component contributions + to ES. } } \description{ Compute the factor model factor expected shortfall (ES) From noreply at r-forge.r-project.org Sat Aug 24 02:07:51 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 24 Aug 2013 02:07:51 +0200 (CEST) Subject: [Returnanalytics-commits] r2870 - pkg/PortfolioAnalytics/sandbox Message-ID: <20130824000751.E4F49185932@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-24 02:07:51 +0200 (Sat, 24 Aug 2013) New Revision: 2870 Added: pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R Log: Adding test script of efficient frontier features. Added: pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R (rev 0) +++ pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R 2013-08-24 00:07:51 UTC (rev 2870) @@ -0,0 +1,137 @@ +# Script to test efficient frontiers + +# Efficient frontiers can be plotted two ways +# 1. Run optimize.portfolio with trace=TRUE and then chart that object +# 2. 
create an efficient frontier and then chart that object + +library(PortfolioAnalytics) +library(DEoptim) +library(ROI) +require(ROI.plugin.quadprog) +require(ROI.plugin.glpk) + +rm(list=ls()) + +data(edhec) +R <- edhec[, 1:5] +funds <- colnames(R) + +# initial portfolio object +init <- portfolio.spec(assets=funds) +# initial constraints +init <- add.constraint(portfolio=init, type="full_investment") +init <- add.constraint(portfolio=init, type="box", min=0, max=1) + +# initial objective +init <- add.objective(portfolio=init, type="return", name="mean") + +# create mean-etl portfolio +meanetl.portf <- add.objective(portfolio=init, type="risk", name="ES") + +# create mean-var portfolio +meanvar.portf <- add.objective(portfolio=init, type="risk", name="var", risk_aversion=1e6) + +# create efficient frontiers + +# mean-var efficient frontier +meanvar.ef <- create.EfficientFrontier(R=R, portfolio=meanvar.portf, type="mean-var") +chart.EfficientFrontier(meanvar.ef, match.col="var", type="b") +chart.Weights.EF(meanvar.ef, colorset=bluemono, match.col="var") + +# run optimize.portfolio and chart the efficient frontier for that object +opt_meanvar <- optimize.portfolio(R=R, portfolio=meanvar.portf, optimize_method="ROI", trace=TRUE) +chart.EfficientFrontier(opt_meanvar, match.col="var", n.portfolios=50) + +# mean-etl efficient frontier +meanetl.ef <- create.EfficientFrontier(R=R, portfolio=meanetl.portf, type="mean-etl") +chart.EfficientFrontier(meanetl.ef, match.col="ES", main="mean-ETL Efficient Frontier", type="b", col="blue") +chart.Weights.EF(meanetl.ef, colorset=bluemono, match.col="ES") + +# mean-etl efficient frontier using random portfolios +meanetl.rp.ef <- create.EfficientFrontier(R=R, portfolio=meanetl.portf, type="random", match.col="ES") +chart.EfficientFrontier(meanetl.rp.ef, match.col="ES", main="mean-ETL RP Efficient Frontier", type="l", col="blue") +chart.Weights.EF(meanetl.rp.ef, colorset=bluemono, match.col="ES") + + +##### overlay efficient frontiers of multiple portfolios ##### +# Create a mean-var efficient frontier for multiple portfolios and overlay the efficient frontier lines +# set up an initial portfolio with the full investment constraint and mean and var objectives +init.portf <- portfolio.spec(assets=funds) +init.portf <- add.constraint(portfolio=init.portf, type="full_investment") +init.portf <- add.objective(portfolio=init.portf, type="risk", name="var") +init.portf <- add.objective(portfolio=init.portf, type="return", name="mean") + +# long only constraints +lo.portf <- add.constraint(portfolio=init.portf, type="long_only") + +# box constraints +box.portf <- add.constraint(portfolio=init.portf, type="box", min=0.05, max=0.65) + +# group constraints (also add long only constraints to the group portfolio) +group.portf <- add.constraint(portfolio=init.portf, type="group", + groups=c(2, 3), + group_min=c(0.25, 0.15), + group_max=c(0.75, 0.55)) +group.portf <- add.constraint(portfolio=group.portf, type="long_only") +# optimize.portfolio(R=R, portfolio=group.portf, optimize_method="ROI") + +foo <- function(R, portfolio_list, type, match.col="ES", main="Efficient Frontiers", cex.axis=0.8, element.color="darkgray", legend.loc=NULL, legend.labels=NULL, cex.legend=0.8, xlim=NULL, ylim=NULL, ...){ + + # create multiple efficient frontier objects (one per portfolio in portfolio_list) + # store in out + out <- list() + for(i in 1:length(portfolio_list)){ + if(!is.portfolio(portfolio_list[[i]])) stop("portfolio in portfolio_list must be of class 'portfolio'") + out[[i]] <- 
create.EfficientFrontier(R=R, portfolio=portfolio_list[[i]], type=type) + } + # get the data to plot scatter of asset returns + asset_ret <- scatterFUN(R=R, FUN="mean") + asset_risk <- scatterFUN(R=R, FUN=match.col) + rnames <- colnames(R) + # plot the assets + plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) + axis(1, cex.axis = cex.axis, col = element.color) + axis(2, cex.axis = cex.axis, col = element.color) + box(col = element.color) + # risk-return scatter of the assets + points(x=asset_risk, y=asset_ret) + text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + + for(i in 1:length(out)){ + tmp <- out[[i]] + tmpfrontier <- tmp$frontier + cnames <- colnames(tmpfrontier) + + # get the "mean" column + mean.mtc <- pmatch("mean", cnames) + if(is.na(mean.mtc)) { + mean.mtc <- pmatch("mean.mean", cnames) + } + if(is.na(mean.mtc)) stop("could not match 'mean' with column name of extractStats output") + + # get the match.col column + mtc <- pmatch(match.col, cnames) + if(is.na(mtc)) { + mtc <- pmatch(paste(match.col, match.col, sep='.'),cnames) + } + if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") + lines(x=tmpfrontier[, mtc], y=tmpfrontier[, mean.mtc], col=i, lty=i, lwd=2) + } + if(!is.null(legend.loc)){ + if(is.null(legend.labels)){ + legend.labels <- paste("Portfolio", 1:length(out), sep=".") + } + legend(legend.loc, legend=legend.labels, col=1:length(out), lty=1:length(out), lwd=2, cex=cex.legend, bty="n") + } + return(invisible(out)) +} + +portf.list <- list(lo.portf, box.portf, group.portf) +legend.labels <- c("Long Only", "Box", "Group + Long Only") +foo(R=R, portfolio_list=portf.list, type="mean-var", match.col="var", + ylim=c(0.0055, 0.0085), xlim=c(0, 0.0025), legend.loc="bottomright", + legend.labels=legend.labels) + +# TODO add the following methods for objects of class efficient.frontier +# - print +# - summary From noreply at r-forge.r-project.org Sat Aug 24 11:24:24 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 24 Aug 2013 11:24:24 +0200 (CEST) Subject: [Returnanalytics-commits] r2871 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: . R man Message-ID: <20130824092424.134BB1851C5@r-forge.r-project.org> Author: shubhanm Date: 2013-08-24 11:24:23 +0200 (Sat, 24 Aug 2013) New Revision: 2871 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/LoSharpe.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.Okunev.R pkg/PerformanceAnalytics/sandbox/Shubhankit/man/LoSharpe.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.Okunev.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/quad.Rd Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd Log: .Rd details added Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION 2013-08-24 00:07:51 UTC (rev 2870) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION 2013-08-24 09:24:23 UTC (rev 2871) @@ -1,37 +1,38 @@ -Package: noniid.sm -Type: Package -Title: Non-i.i.d. 
GSoC 2013 Shubhankit -Version: 0.1 -Date: $Date: 2013-05-13 14:30:22 -0500 (Mon, 13 May 2013) $ -Author: Shubhankit Mohan -Contributors: Peter Carl, Brian G. Peterson -Depends: - xts, - PerformanceAnalytics -Suggests: - PortfolioAnalytics -Maintainer: Brian G. Peterson -Description: GSoC 2013 project to replicate literature on drawdowns and - non-i.i.d assumptions in finance. -License: GPL-3 -ByteCompile: TRUE -Collate: - 'ACStdDev.annualized.R' - 'CalmarRatio.Normalized.R' - 'CDDopt.R' - 'CDrawdown.R' - 'chart.Autocorrelation.R' - 'EmaxDDGBM.R' - 'GLMSmoothIndex.R' - 'maxDDGBM.R' - 'na.skip.R' - 'Return.GLM.R' - 'table.ComparitiveReturn.GLM.R' - 'table.normDD.R' - 'table.UnsmoothReturn.R' - 'UnsmoothReturn.R' - 'AcarSim.R' - 'CDD.Opt.R' - 'CalmarRatio.Norm.R' - 'SterlingRatio.Norm.R' - +Package: noniid.sm +Type: Package +Title: Non-i.i.d. GSoC 2013 Shubhankit +Version: 0.1 +Date: $Date: 2013-05-13 14:30:22 -0500 (Mon, 13 May 2013) $ +Author: Shubhankit Mohan +Contributors: Peter Carl, Brian G. Peterson +Depends: + xts, + PerformanceAnalytics +Suggests: + PortfolioAnalytics +Maintainer: Brian G. Peterson +Description: GSoC 2013 project to replicate literature on drawdowns and + non-i.i.d assumptions in finance. +License: GPL-3 +ByteCompile: TRUE +Collate: + 'ACStdDev.annualized.R' + 'CalmarRatio.Normalized.R' + 'CDDopt.R' + 'CDrawdown.R' + 'chart.Autocorrelation.R' + 'EmaxDDGBM.R' + 'GLMSmoothIndex.R' + 'maxDDGBM.R' + 'na.skip.R' + 'Return.GLM.R' + 'table.ComparitiveReturn.GLM.R' + 'table.normDD.R' + 'table.UnsmoothReturn.R' + 'UnsmoothReturn.R' + 'AcarSim.R' + 'CDD.Opt.R' + 'CalmarRatio.Norm.R' + 'SterlingRatio.Norm.R' + 'LoSharpe.R' + 'Return.Okunev.R' Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE 2013-08-24 00:07:51 UTC (rev 2870) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE 2013-08-24 09:24:23 UTC (rev 2871) @@ -1,12 +1,19 @@ -export(ACStdDev.annualized) -export(CalmarRatio.Normalized) -export(CDrawdown) -export(chart.Autocorrelation) -export(EMaxDDGBM) -export(GLMSmoothIndex) -export(QP.Norm) -export(SterlingRatio.Normalized) -export(table.ComparitiveReturn.GLM) -export(table.EMaxDDGBM) -export(table.NormDD) -export(table.UnsmoothReturn) +export(AcarSim) +export(ACStdDev.annualized) +export(CalmarRatio.Norm) +export(CalmarRatio.Normalized) +export(CDD.Opt) +export(CDDOpt) +export(CDrawdown) +export(chart.Autocorrelation) +export(EMaxDDGBM) +export(GLMSmoothIndex) +export(LoSharpe) +export(QP.Norm) +export(Return.Okunev) +export(SterlingRatio.Norm) +export(SterlingRatio.Normalized) +export(table.ComparitiveReturn.GLM) +export(table.EMaxDDGBM) +export(table.NormDD) +export(table.UnsmoothReturn) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R 2013-08-24 00:07:51 UTC (rev 2870) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R 2013-08-24 09:24:23 UTC (rev 2871) @@ -1,14 +1,14 @@ -#'@title Getmansky Lo Markov Smoothing Index Parameter +#'@title GLM Index #'@description -#'A useful summary statistic for measuring the concentration of weights is +#'Getmansky Lo Markov Smoothing Index is a useful summary statistic for measuring the concentration of weights is #' a sum of square of Moving Average lag coefficient. 
#' This measure is well known in the industrial organization literature as the -#' Herfindahl index, a measure of the concentration of firms in a given industry. +#' \bold{ Herfindahl index}, a measure of the concentration of firms in a given industry. #' The index is maximized when one coefficient is 1 and the rest are 0. In the context of -#'smoothed returns, a lower value of x implies more smoothing, and the upper bound -#'of 1 implies no smoothing, hence x is reffered as a ''smoothingindex' '. -#' -#' \deqn{ R_t = \mu + \beta \delta_t+ \xi_t} +#'smoothed returns, a lower value implies more smoothing, and the upper bound +#'of 1 implies no smoothing, hence \eqn{\xi} is reffered as a '\bold{smoothingindex}'. +#'\deqn{ \xi = \sum\theta(j)^2} +#'Where j belongs to 0 to k,which is the number of lag factors input. #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns #' @author Peter Carl, Brian Peterson, Shubhankit Mohan Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/LoSharpe.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/LoSharpe.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/LoSharpe.R 2013-08-24 09:24:23 UTC (rev 2871) @@ -0,0 +1,92 @@ +#'@title Andrew Lo Sharpe Ratio +#'@description +#' Although the Sharpe ratio has become part of the canon of modern financial +#' analysis, its applications typically do not account for the fact that it is an +#' estimated quantity, subject to estimation errors that can be substantial in +#' some cases. +#' +#' Many studies have documented various violations of the assumption of +#' IID returns for financial securities. +#' +#' Under the assumption of stationarity,a version of the Central Limit Theorem can +#' still be applied to the estimator . +#' @details +#' The relationship between SR and SR(q) is somewhat more involved for non- +#'IID returns because the variance of Rt(q) is not just the sum of the variances of component returns but also includes all the covariances. Specifically, under +#' the assumption that returns \eqn{R_t} are stationary, +#' \deqn{ Var[(R_t)] = \sum \sum Cov(R(t-i),R(t-j)) = q{\sigma^2} + 2{\sigma^2} \sum (q-k)\rho(k) } +#' Where \eqn{ \rho(k) = Cov(R(t),R(t-k))/Var[(R_t)]} is the \eqn{k^{th}} order autocorrelation coefficient of the series of returns.This yields the following relationship between SR and SR(q): +#' and i,j belongs to 0 to q-1 +#'\deqn{SR(q) = \eta(q) } +#'Where : +#' \deqn{ }{\eta(q) = [q]/[\sqrt(q\sigma^2) + 2\sigma^2 \sum(q-k)\rho(k)] } +#' Where k belongs to 0 to q-1 +#' @param Ra an xts, vector, matrix, data frame, timeSeries or zoo object of +#' daily asset returns +#' @param Rf an xts, vector, matrix, data frame, timeSeries or zoo object of +#' annualized Risk Free Rate +#' @param q Number of autocorrelated lag periods. Taken as 3 (Default) +#' @param \dots any other pass thru parameters +#' @author Brian G. Peterson, Peter Carl, Shubhankit Mohan +#' @references Getmansky, Mila, Lo, Andrew W. and Makarov, Igor,\emph{ An Econometric Model of Serial Correlation and Illiquidity in Hedge Fund Returns} (March 1, 2003). MIT Sloan Working Paper No. 4288-03; MIT Laboratory for Financial Engineering Working Paper No. LFE-1041A-03; EFMA 2003 Helsinki Meetings. 
+#' \url{http://ssrn.com/abstract=384700}
+#' @keywords ts multivariate distribution models non-iid
+#' @examples
+#'
+#' data(edhec)
+#' head(LoSharpe(edhec,0,3))
+#' @rdname LoSharpe
+#' @export
+LoSharpe <-
+  function (Ra,Rf = 0,q = 3, ...)
+  { # @author Brian G. Peterson, Peter Carl
+    
+    # Function:
+    R = checkData(Ra, method="xts")
+    # Get dimensions and labels
+    columns.a = ncol(R)
+    columnnames.a = colnames(R)
+    # Time used for daily Return manipulations
+    Time= 252*nyears(R)
+    clean.lo <- function(column.R,q) {
+      # compute the lagged return series
+      gamma.k = matrix(0,q)
+      mu = sum(column.R)/(Time)
+      Rf = Rf/(Time)
+      for(i in 1:q){
+        lagR = lag(column.R, k=i)
+        # compute the Momentum Lagged Values
+        gamma.k[i] = (sum(((column.R-mu)*(lagR-mu)),na.rm=TRUE))
+      }
+      return(gamma.k)
+    }
+    neta.lo <- function(pho.k,q) {
+      # compute the weighted sum of the lagged autocorrelations
+      sumq = 0
+      for(j in 1:q){
+        sumq = sumq + (q-j)*pho.k[j]
+      }
+      return(q/(sqrt(q+2*sumq)))
+    }
+    for(column.a in 1:columns.a) { # for each asset passed in as R
+      # clean the data and get rid of NAs
+      mu = sum(R[,column.a])/(Time)
+      sig = sqrt(((R[,column.a]-mu)^2/(Time)))
+      pho.k = clean.lo(R[,column.a],q)/(as.numeric(sig[1]))
+      netaq = neta.lo(pho.k,q)
+      column.lo = (netaq*((mu-Rf)/as.numeric(sig[1])))
+      
+      if(column.a == 1) { lo = column.lo }
+      else { lo = cbind(lo, column.lo) }
+      
+    }
+    colnames(lo) = columnnames.a
+    rownames(lo) = paste("Lo Sharpe Ratio")
+    return(lo)
+  }

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.Okunev.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.Okunev.R	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.Okunev.R	2013-08-24 09:24:23 UTC (rev 2871)
@@ -0,0 +1,55 @@
+#'@title OW Return Model
+#'@description The objective is to determine the true underlying return by removing the
+#' autocorrelation structure in the original return series without making any assumptions
+#' regarding the actual time series properties of the underlying process. We are
+#' implicitly assuming by this approach that the autocorrelations that arise in reported
+#' returns are entirely due to the smoothing behavior funds engage in when reporting
+#' results. In fact, the method may be adopted to produce any desired
+#' level of autocorrelation at any lag and is not limited to simply eliminating all
+#' autocorrelations. It can be seen as the general form of the Geltner return model.
+#'@details The unsmoothing is applied recursively, one lag at a time up to order \code{q}:
+#' at each step the smoothing coefficient implied by the observed autocorrelations is
+#' solved for (see \code{quad}) and removed, driving the autocorrelation at that lag to zero.
+#' @references "Hedge Fund Risk Factors and Value at Risk of Credit
+#' Trading Strategies", John Okunev & Derek White
+#'
+#' @keywords ts multivariate distribution models
+#' @examples
+#'
+#' data(managers)
+#' head(Return.Okunev(managers[,1:3]),n=3)
+#'
+#'
+#' @export
+
+Return.Okunev<-function(R,q=3)
+{
+  column.okunev=R
+  column.okunev <- column.okunev[!is.na(column.okunev)]
+  for(i in 1:q)
+  {
+    lagR = lag(column.okunev, k=i)
+    column.okunev= (column.okunev-(lagR*quad(lagR,0)))/(1-quad(lagR,0))
+  }
+  return(c(column.okunev))
+}
+#' Okunev-White quadratic helper function
+quad <- function(R,d)
+{
+  # first two autocorrelation coefficients of the input series
+  coeff = as.numeric(acf(as.numeric(na.omit(R)), plot = FALSE)[1:2][[1]])
+  b=-(1+coeff[2]-2*d*coeff[1])
+  c=(coeff[1]-d)
+  ans= (-b-sqrt(b*b-4*c*c))/(2*c)
+  return(c(ans))
+}
+###############################################################################
+# R (http://r-project.org/) Econometrics for Performance and Risk Analysis
+#
+# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson
+#
+# This R package is distributed under the terms of the GNU Public License (GPL)
+# for full details see the file COPYING
+#
+# $Id: Return.Okunev.R 2163 2012-07-16 00:30:19Z braverock $
+#
+###############################################################################
+

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd	2013-08-24 00:07:51 UTC (rev 2870)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd	2013-08-24 09:24:23 UTC (rev 2871)
@@ -1,25 +1,23 @@
-\name{table.EMaxDDGBM}
-\alias{table.EMaxDDGBM}
-\title{Expected Drawdown using Brownian Motion Assumptions}
-\usage{
-  table.EMaxDDGBM(R, digits = 4)
-}
-\arguments{
-  \item{R}{an xts, vector, matrix, data frame, timeSeries
-  or zoo object of asset returns}
-}
-\description{
-  Works on the model specified by Maddon-Ismail which
-  investigates the behavior of this statistic for a
-  Brownian motion with drift.
-}
-\author{
-  Peter Carl, Brian Peterson, Shubhankit Mohan
-}
-\keyword{Assumptions}
-\keyword{Brownian}
-\keyword{Drawdown}
-\keyword{Expected}
-\keyword{Motion}
-\keyword{Using}
-
+\name{EMaxDDGBM}
+\alias{EMaxDDGBM}
+\title{Expected Drawdown using Brownian Motion Assumptions}
+\usage{
+  EMaxDDGBM(R, digits = 4)
+}
+\arguments{
+  \item{R}{an xts, vector, matrix, data frame, timeSeries
+  or zoo object of asset returns}
+}
+\description{
+  Works on the model specified by Magdon-Ismail
+}
+\author{
+  Peter Carl, Brian Peterson, Shubhankit Mohan
+}
+\keyword{Assumptions}
+\keyword{Brownian}
+\keyword{Drawdown}
+\keyword{Expected}
+\keyword{Motion}
+\keyword{Using}
+

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd	2013-08-24 00:07:51 UTC (rev 2870)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd	2013-08-24 09:24:23 UTC (rev 2871)
@@ -1,7 +1,7 @@
 \name{GLMSmoothIndex}
 \alias{GLMSmoothIndex}
 \alias{Return.Geltner}
-\title{Getmansky Lo Markov Smoothing Index Parameter}
+\title{GLM Index}
 \usage{
   GLMSmoothIndex(R = NULL, ...)
 }
@@ -10,18 +10,19 @@
   or zoo object of asset returns}
 }
 \description{
-  A useful summary statistic for measuring the
-  concentration of weights is a sum of square of Moving
-  Average lag coefficient. This measure is well known in
-  the industrial organization literature as the Herfindahl
-  index, a measure of the concentration of firms in a given
-  industry. The index is maximized when one coefficient is
-  1 and the rest are 0. In the context of smoothed returns,
-  a lower value of x implies more smoothing, and the upper
-  bound of 1 implies no smoothing, hence x is reffered as a
-  ''smoothingindex' '.
-
-  \deqn{ R_t = \mu + \beta \delta_t+ \xi_t}
+  The Getmansky-Lo-Makarov smoothing index is a useful
+  summary statistic for measuring the concentration of
+  weights: the sum of squares of the Moving Average lag
+  coefficients. This measure is well known in the
+  industrial organization literature as the
+  \bold{Herfindahl index}, a measure of the concentration
+  of firms in a given industry. The index is maximized when
+  one coefficient is 1 and the rest are 0. In the context
+  of smoothed returns, a lower value implies more
+  smoothing, and the upper bound of 1 implies no smoothing,
+  hence \eqn{\xi} is referred to as a
+  '\bold{smoothingindex}'.
\deqn{ \xi = \sum\theta(j)^2} + Where j belongs to 0 to k,which is the number of lag + factors input. } \examples{ data(edhec) Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/LoSharpe.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/LoSharpe.Rd (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/LoSharpe.Rd 2013-08-24 09:24:23 UTC (rev 2871) @@ -0,0 +1,70 @@ +\name{LoSharpe} +\alias{LoSharpe} +\title{Andrew Lo Sharpe Ratio} +\usage{ + LoSharpe(Ra, Rf = 0, q = 3, ...) +} +\arguments{ + \item{Ra}{an xts, vector, matrix, data frame, timeSeries + or zoo object of daily asset returns} + + \item{Rf}{an xts, vector, matrix, data frame, timeSeries + or zoo object of annualized Risk Free Rate} + + \item{q}{Number of autocorrelated lag periods. Taken as 3 + (Default)} + + \item{\dots}{any other pass thru parameters} +} +\description{ + Although the Sharpe ratio has become part of the canon of + modern financial analysis, its applications typically do + not account for the fact that it is an estimated + quantity, subject to estimation errors that can be + substantial in some cases. + + Many studies have documented various violations of the + assumption of IID returns for financial securities. + + Under the assumption of stationarity,a version of the + Central Limit Theorem can still be applied to the + estimator . +} +\details{ + The relationship between SR and SR(q) is somewhat more + involved for non- IID returns because the variance of + Rt(q) is not just the sum of the variances of component + returns but also includes all the covariances. + Specifically, under the assumption that returns \eqn{R_t} + are stationary, \deqn{ Var[(R_t)] = \sum \sum + Cov(R(t-i),R(t-j)) = q{\sigma^2} + 2{\sigma^2} \sum + (q-k)\rho(k) } Where \eqn{ \rho(k) = + Cov(R(t),R(t-k))/Var[(R_t)]} is the \eqn{k^{th}} order + autocorrelation coefficient of the series of returns.This + yields the following relationship between SR and SR(q): + and i,j belongs to 0 to q-1 \deqn{SR(q) = \eta(q) } Where + : \deqn{ }{\eta(q) = [q]/[\sqrt(q\sigma^2) + 2\sigma^2 + \sum(q-k)\rho(k)] } Where k belongs to 0 to q-1 +} +\examples{ +data(edhec) +head(LoSharpe(edhec,0,3) +} +\author{ + Brian G. Peterson, Peter Carl, Shubhankit Mohan +} +\references{ + Getmansky, Mila, Lo, Andrew W. and Makarov, Igor,\emph{ + An Econometric Model of Serial Correlation and + Illiquidity in Hedge Fund Returns} (March 1, 2003). MIT + Sloan Working Paper No. 4288-03; MIT Laboratory for + Financial Engineering Working Paper No. LFE-1041A-03; + EFMA 2003 Helsinki Meetings. 
\code{\link[stats]{}} \cr + \url{http://ssrn.com/abstract=384700} +} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{non-iid} +\keyword{ts} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd 2013-08-24 00:07:51 UTC (rev 2870) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd 2013-08-24 09:24:23 UTC (rev 2871) @@ -1,47 +1,47 @@ -\name{Return.GLM} -\alias{Return.GLM} -\title{GLM Return Model} -\usage{ - Return.GLM(edhec,4) -} -\arguments{ - \item{Ra}{: an xts, vector, matrix, data frame, - timeSeries or zoo object of asset returns} - - \item{q}{: order of autocorrelation coefficient lag - factors} -} -\description{ - True returns represent the flow of information that would - determine the equilibrium value of the fund's securities - in a frictionless market. However, true economic returns - are not observed. The returns to hedge funds and other - alternative investments are often highly serially - correlated.We propose an econometric model of return - smoothingand develop estimators for the smoothing pro?le - as well as a smoothing-adjusted Sharpe ratio. -} -\details{ - To quantify the impact of all of these possible sources - of serial correlation, denote by R(t) the true economic - return of a hedge fund in period 't'; and let R(t) - satisfy the following linear single-factor model: where: - \deqn{R(0,t) = \theta_{0}R(t) + \theta_{1}R(t-1) + - \theta_{2}R(t-2) .... + \theta_{k}R(t-k)} where - \eqn{\theta}'i is defined as the weighted lag of - autocorrelated lag and whose sum is 1. -} -\author{ - Brian Peterson,Peter Carl, Shubhankit Mohan -} -\references{ - Mila Getmansky, Andrew W. Lo, Igor Makarov,\emph{An - econometric model of serial correlation and and - illiquidity in hedge fund Returns},Journal of Financial - Economics 74 (2004). -} -\keyword{distribution} -\keyword{models} -\keyword{multivariate} -\keyword{ts} - +\name{Return.GLM} +\alias{Return.GLM} +\title{GLM Return Model} +\usage{ + Return.GLM(edhec,4) +} +\arguments{ + \item{Ra}{: an xts, vector, matrix, data frame, + timeSeries or zoo object of asset returns} + + \item{q}{: order of autocorrelation coefficient lag + factors} +} +\description{ + True returns represent the flow of information that would + determine the equilibrium value of the fund's securities + in a frictionless market. However, true economic returns + are not observed. The returns to hedge funds and other + alternative investments are often highly serially + correlated.We propose an econometric model of return + smoothingand develop estimators for the smoothing + pro?le as well as a smoothing-adjusted Sharpe ratio. +} +\details{ + To quantify the impact of all of these possible sources + of serial correlation, denote by R(t) the true economic + return of a hedge fund in period 't'; and let R(t) + satisfy the following linear single-factor model: where: + \deqn{R(0,t) = \theta_{0}R(t) + \theta_{1}R(t-1) + + \theta_{2}R(t-2) .... + \theta_{k}R(t-k)} where + \eqn{\theta}'i is defined as the weighted lag of + autocorrelated lag and whose sum is 1. +} +\author{ + Brian Peterson,Peter Carl, Shubhankit Mohan +} +\references{ + Mila Getmansky, Andrew W. Lo, Igor Makarov,\emph{An + econometric model of serial correlation and and + illiquidity in hedge fund Returns},Journal of Financial + Economics 74 (2004). 
+}
+\keyword{distribution}
+\keyword{model}
+\keyword{multivariate}
+\keyword{ts}
+

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.Okunev.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.Okunev.Rd	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.Okunev.Rd	2013-08-24 09:24:23 UTC (rev 2871)
@@ -0,0 +1,36 @@
+\name{Return.Okunev}
+\alias{Return.Okunev}
+\title{OW Return Model}
+\usage{
+  Return.Okunev(R, q = 3)
+}
+\description{
+  The objective is to determine the true underlying return
+  by removing the autocorrelation structure in the original
+  return series without making any assumptions regarding
+  the actual time series properties of the underlying
+  process. We are implicitly assuming by this approach that
+  the autocorrelations that arise in reported returns are
+  entirely due to the smoothing behavior funds engage in
+  when reporting results. In fact, the method may be
+  adopted to produce any desired level of autocorrelation
+  at any lag and is not limited to simply eliminating all
+  autocorrelations. It can be seen as the general form of
+  the Geltner return model.
+}
+\details{
+  The unsmoothing is applied recursively, one lag at a time
+  up to order \code{q}: at each step the smoothing
+  coefficient implied by the observed autocorrelations is
+  solved for and removed, driving the autocorrelation at
+  that lag to zero.
+}
+\examples{
+data(managers)
+head(Return.Okunev(managers[,1:3]),n=3)
+}
+\references{
+  "Hedge Fund Risk Factors and Value at Risk of Credit
+  Trading Strategies", John Okunev & Derek White
+}
+\keyword{distribution}
+\keyword{models}
+\keyword{multivariate}
+\keyword{ts}
+

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/quad.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/quad.Rd	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/quad.Rd	2013-08-24 09:24:23 UTC (rev 2871)
@@ -0,0 +1,10 @@
+\name{quad}
+\alias{quad}
+\title{Okunev-White quadratic helper function}
+\usage{
+  quad(R, d)
+}
+\description{
+  Solves for the smoothing coefficient implied by the first
+  two autocorrelations of a return series; used internally
+  by \code{Return.Okunev}.
+}
+

From noreply at r-forge.r-project.org  Sat Aug 24 14:19:30 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Sat, 24 Aug 2013 14:19:30 +0200 (CEST)
Subject: [Returnanalytics-commits] r2872 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: . R man
R man Message-ID: <20130824121931.0DD7B184CB9@r-forge.r-project.org> Author: shubhanm Date: 2013-08-24 14:19:30 +0200 (Sat, 24 Aug 2013) New Revision: 2872 Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Norm.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.Okunev.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/SterlingRatio.Norm.R pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.Okunev.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/SterlingRatio.Norm.Rd Log: /.Rd file detailed info Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE 2013-08-24 12:19:30 UTC (rev 2872) @@ -10,6 +10,7 @@ export(GLMSmoothIndex) export(LoSharpe) export(QP.Norm) +export(Return.GLM) export(Return.Okunev) export(SterlingRatio.Norm) export(SterlingRatio.Normalized) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/ACStdDev.annualized.R 2013-08-24 12:19:30 UTC (rev 2872) @@ -1,5 +1,8 @@ #' @title Autocorrelation adjusted Standard Deviation #' @description Incorporating the component of lagged autocorrelation factor into adjusted time scale standard deviation translation +#' @details Given a sample of historical returns R(1), R(2), . . ., R(T), where 't' is the unit time interval, the square root time translation can be defined as: +#' \deqn{\sigma(T) = \sqrt{T} \sigma(t)} #' @aliases sd.multiperiod sd.annualized StdDev.annualized #' @param x an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns @@ -8,8 +11,8 @@ #' 12, quarterly scale = 4) #' @param \dots any other passthru parameters #' @author Peter Carl, Brian Peterson, Shubhankit Mohan -#' @seealso \code{\link[stats]{sd}} \cr -#' \url{http://wikipedia.org/wiki/inverse-square_law} +#' @seealso \code{\link[stats]{sd}} \cr \code{\link[stats]{stdDev.annualized}} \cr +#' \url{http://en.wikipedia.org/wiki/Volatility_(finance)} #' @references Burghardt, G., and L.
Liu, \emph{ It's the Autocorrelation, Stupid (November 2012) Newedge #' working paper.} #' \code{\link[stats]{}} \cr Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Norm.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Norm.R 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Norm.R 2013-08-24 12:19:30 UTC (rev 2872) @@ -1,14 +1,17 @@ -#' @title Normalized Calmar reward/risk ratio +#' @title Normalized Calmar ratio #' #' @description Normalized Calmar and Sterling Ratios are yet another method of creating a #' risk-adjusted measure for ranking investments similar to the Sharpe Ratio. #' #' @details #' Both the Normalized Calmar and the Sterling ratio are the ratio of annualized return -#' over the absolute value of the maximum drawdown of an investment. The -#' Sterling ratio adds an excess risk measure to the maximum drawdown, -#' traditionally and defaulting to 10%. -#' +#' over the absolute value of the maximum drawdown of an investment. +#' \deqn{Sterling Ratio = [Return over (0,T)]/[max Drawdown(0,T)]} #' It is also traditional to use a three year return series for these #' calculations, although the functions included here make no effort to #' determine the length of your series. If you want to use a subset of your Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R 2013-08-24 12:19:30 UTC (rev 2872) @@ -13,8 +13,7 @@ #' asset returns #' @author Peter Carl, Brian Peterson, Shubhankit Mohan #' @aliases Return.Geltner -#' @references "An econometric model of serial correlation and illiquidity in -#' hedge fund returns" Mila Getmansky, Andrew W. Lo, Igor Makarov +#' @references \emph{Getmansky, Mila, Lo, Andrew W. and Makarov, Igor} An Econometric Model of Serial Correlation and Illiquidity in Hedge Fund Returns (March 1, 2003). MIT Sloan Working Paper No. 4288-03; MIT Laboratory for Financial Engineering Working Paper No. LFE-1041A-03; EFMA 2003 Helsinki Meetings. Available at SSRN: \url{http://ssrn.com/abstract=384700} #' #' @keywords ts multivariate distribution models non-iid #' @examples Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.GLM.R 2013-08-24 12:19:30 UTC (rev 2872) @@ -2,8 +2,8 @@ #' @description True returns represent the flow of information that would determine the equilibrium #' value of the fund's securities in a frictionless market. However, true economic #' returns are not observed.
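## ---------------------------------------------------------------------------
## Editor's note: a minimal base-R sketch (not part of any commit above)
## illustrating the square-root-of-time rule sigma(T) = sqrt(T)*sigma(t)
## documented in ACStdDev.annualized. All names and values are illustrative.
set.seed(42)
monthly <- rnorm(1200, mean = 0.005, sd = 0.02)  # 100 years of iid monthly returns
sd.scaled <- sqrt(12) * sd(monthly)              # annualized by the square-root rule
annual <- colSums(matrix(monthly, nrow = 12))    # non-overlapping annual sums
sd.empirical <- sd(annual)                       # empirical annual standard deviation
round(c(scaled = sd.scaled, empirical = sd.empirical), 4)
## For iid returns the two agree; serial correlation biases the naive rule,
## which is the effect ACStdDev.annualized adjusts for.
## ---------------------------------------------------------------------------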
The returns to hedge funds and other alternative investments are often -#' highly serially correlated.We propose an econometric model of return smoothingand develop estimators for the smoothing -#' pro?le as well as a smoothing-adjusted Sharpe ratio. +#' highly serially correlated. We propose an econometric model of return smoothing and \emph{develop estimators for the smoothing +#' profile as well as a smoothing-adjusted Sharpe ratio}. #' @usage #' Return.GLM(edhec,4) #' @usage @@ -19,11 +19,19 @@ #' the true economic return of a hedge fund in period 't'; and let R(t) satisfy the following linear #' single-factor model: where: #' \deqn{R(0,t) = \theta_{0}R(t) + \theta_{1}R(t-1) + \theta_{2}R(t-2) .... + \theta_{k}R(t-k)} -#' where \eqn{\theta}'i is defined as the weighted lag of autocorrelated lag and whose sum is 1. +#' where \eqn{\theta_i} is defined as the weighted lag of the autocorrelated lag, whose sum is 1. +#' \deqn{\theta(j) \in [0,1], j = 0,1,....,k} +#' and, +#' \deqn{\theta_0 + \theta_1 + \theta_2 + \cdots + \theta_k = 1} +#' Using the methods outlined above, the paper estimates the smoothing model +#' using a maximum likelihood procedure, programmed in Matlab using the Optimization Toolbox and replicated in Stata using its MA(k) estimation routine. Using the time series analysis and computational finance ("\bold{tseries}") library, we fit an \bold{ARMA} model to a univariate time series by conditional least squares. For exact maximum likelihood estimation, arima0 from package \bold{stats} can be used. +#' #' @author Brian Peterson,Peter Carl, Shubhankit Mohan #' @references Mila Getmansky, Andrew W. Lo, Igor Makarov,\emph{An econometric model of serial correlation and -#' and illiquidity in hedge fund Returns},Journal of Financial Economics 74 (2004). +#' illiquidity in hedge fund Returns}, Journal of Financial Economics 74 (2004). \url{http://ssrn.com/abstract=384700} #' @keywords ts multivariate distribution model +#' @seealso Return.Geltner +#' @export Return.GLM <- function (Ra,q=3) { # @author Brian G. Peterson, Peter Carl Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.Okunev.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.Okunev.R 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.Okunev.R 2013-08-24 12:19:30 UTC (rev 2872) @@ -7,10 +7,21 @@ #' results. In fact, the method may be adopted to produce any desired #' level of autocorrelation at any lag and is not limited to simply eliminating all #' autocorrelations. It can be said as the general form of Geltner Return Model -#'@details dffd -#' @references "Hedge Fund Risk Factors and Value at Risk of Credit -#' Trading Strategies , John Okunev & Derek White -#' +#'@details +#' Given a sample of historical returns \eqn{R(1), R(2), . . ., R(T)}, the method assumes the fund manager smooths returns in the following manner: +#' \deqn{ r(0,t) = \sum \beta (i) r(0,t-i) + (1- \alpha)r(m,t) } +#' where \deqn{\sum \beta(i) = (1- \alpha)} +#' \bold{r(0,t)}: is the observed (reported) return at time t (with 0 adjustments to reported returns), +#' \bold{r(m,t)}: is the true underlying (unreported) return at time t (determined by making m adjustments to reported returns). +#' +#' To remove the \bold{first m orders} of autocorrelation from a given return series we would proceed in a manner very similar to that detailed in \code{\link{Return.Geltner}}.
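## ---------------------------------------------------------------------------
## Editor's note: a minimal sketch (not part of any commit above) of the
## smoothing model documented in Return.GLM: observed returns are a weighted
## lag of true returns with theta weights summing to 1, and the smoothing
## profile can be recovered from an MA(k) fit. All names are illustrative.
set.seed(1)
theta <- c(0.6, 0.25, 0.15)                   # theta_0 + theta_1 + theta_2 = 1
r.true <- rnorm(500, mean = 0.01, sd = 0.04)  # unobservable economic returns
r.obs <- stats::filter(r.true, theta, method = "convolution", sides = 1)
r.obs <- as.numeric(r.obs[!is.na(r.obs)])
acf(r.obs, lag.max = 3, plot = FALSE)         # smoothing induces serial correlation
# conditional-least-squares-style MA(2) fit; the fitted coefficients are
# proportional to the smoothing weights theta_1/theta_0 and theta_2/theta_0
fit <- arima(r.obs - mean(r.obs), order = c(0, 0, 2), include.mean = FALSE)
round(coef(fit), 3)
## ---------------------------------------------------------------------------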
We would initially remove the first order autocorrelation, then proceed to eliminate the second order autocorrelation through the iteration process. In general, to remove any order, m, autocorrelations from a given return series we would make the following transformation to returns. This removes the +#' autocorrelation structure in the original return series without making any assumptions regarding the actual time series properties of the underlying process. We are implicitly assuming by this approach that the autocorrelations that arise in reported returns are entirely due to the smoothing behavior funds engage in when reporting results. In fact, the method may be adopted to produce any desired level of autocorrelation at any lag and is not limited to simply eliminating all autocorrelations. +#' +#' +#' @references Okunev, John and White, Derek R., \emph{ Hedge Fund Risk Factors and Value at Risk of Credit Trading Strategies} (October 2003). +#' Available at SSRN: \url{http://ssrn.com/abstract=460641} +#' @author Peter Carl, Brian Peterson, Shubhankit Mohan +#' @seealso \code{\link{Return.Geltner}} \cr #' @keywords ts multivariate distribution models #' @examples #' @@ -19,7 +30,6 @@ #' #' #' @export - Return.Okunev<-function(R,q=3) { column.okunev=R Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/SterlingRatio.Norm.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/SterlingRatio.Norm.R 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/SterlingRatio.Norm.R 2013-08-24 12:19:30 UTC (rev 2872) @@ -1,21 +1,21 @@ -#' @title Normalized Sterling reward/risk ratio +#' @title Normalized Sterling Ratio #' -#' @description Normalized Sterling and Sterling Ratios are yet another method of creating a +#' @description Normalized Sterling Ratio is another method of creating a #' risk-adjusted measure for ranking investments similar to the Sharpe Ratio. #' #' @details #' Both the Normalized Sterling and the Calmar ratio are the ratio of annualized return #' over the absolute value of the maximum drawdown of an investment. The -#' Sterling ratio adds an excess risk measure to the maximum drawdown, -#' traditionally and defaulting to 10%. +#' Sterling ratio adds an \bold{excess risk} measure to the maximum drawdown, +#' traditionally and defaulting to 10\%. #' -#' It is also traditional to use a three year return series for these +#' \deqn{Sterling Ratio = [Return over (0,T)]/[max Drawdown(0,T) - 10\%]} +#' It is also \emph{traditional} to use a three year return series for these #' calculations, although the functions included here make no effort to #' determine the length of your series. If you want to use a subset of your #' series, you'll need to truncate or subset the input data to the desired #' length. -#' -#' +#' Malik Magdon-Ismail implemented a scaling law for different \eqn{\mu}, \eqn{\sigma} and \eqn{T}. #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns #' @param scale number of periods in a year (daily scale = 252, monthly scale = #' 12, quarterly scale = 4) #' @param excess for Sterling Ratio, excess amount to add to the max drawdown, #' traditionally and default .1 (10\%) #' @author Brian G. Peterson , Peter Carl , Shubhankit Mohan -#' @references Bacon, Carl. \emph{Magdon-Ismail, M. and Amir Atiya, Maximum drawdown. Risk Magazine, 01 Oct 2004. +#' @references Bacon, Carl, Magdon-Ismail, M. and Amir Atiya,\emph{ Maximum drawdown. Risk Magazine,} 01 Oct 2004.
+#' \url{http://www.cs.rpi.edu/~magdon/talks/mdd_NYU04.pdf} #' @keywords ts multivariate distribution models #' @examples #' Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd 2013-08-24 12:19:30 UTC (rev 2872) @@ -24,6 +24,13 @@ factor into adjusted time scale standard deviation translation } +\details{ + Given a sample of historical returns R(1), R(2), . . ., + R(T), where 't' is the unit time interval, the square + root time translation can be defined as: + \deqn{\sigma(T) = \sqrt{T} \sigma(t)} +} \author{ Peter Carl, Brian Peterson, Shubhankit Mohan } @@ -35,7 +42,8 @@ } \seealso{ \code{\link[stats]{sd}} \cr - \url{http://wikipedia.org/wiki/inverse-square_law} + \code{\link[stats]{stdDev.annualized}} \cr + \url{http://en.wikipedia.org/wiki/Volatility_(finance)} } \keyword{distribution} \keyword{models} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd 2013-08-24 12:19:30 UTC (rev 2872) @@ -1,6 +1,6 @@ \name{CalmarRatio.Norm} \alias{CalmarRatio.Norm} -\title{Normalized Calmar reward/risk ratio} +\title{Normalized Calmar ratio} \usage{ CalmarRatio.Norm(R, tau = 1, scale = NA) } @@ -22,14 +22,17 @@ \details{ Both the Normalized Calmar and the Sterling ratio are the ratio of annualized return over the absolute value of the - maximum drawdown of an investment. The Sterling ratio - adds an excess risk measure to the maximum drawdown, - traditionally and defaulting to 10%. - - It is also traditional to use a three year return series - for these calculations, although the functions included - here make no effort to determine the length of your - series. If you want to use a subset of your series, + maximum drawdown of an investment. \deqn{Sterling Ratio = + [Return over (0,T)]/[max Drawdown(0,T)]} It is also + \emph{traditional} to use a three year return series for + these calculations, although the functions included here + make no effort to determine the length of your series. + If you want to use a subset of your series, you'll need to truncate or subset the input data to the desired length.
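## ---------------------------------------------------------------------------
## Editor's note: a minimal base-R sketch (not part of any commit above) of
## the ratios described in CalmarRatio.Norm/SterlingRatio.Norm: annualized
## return over the absolute maximum drawdown, with the Sterling variant
## adding the traditional 10% excess. Names and values are illustrative.
set.seed(7)
r <- rnorm(36, mean = 0.008, sd = 0.05)      # three years of monthly returns
nav <- cumprod(1 + r)                        # net asset value path
max.dd <- max(1 - nav / cummax(nav))         # maximum drawdown as a positive fraction
ann.ret <- prod(1 + r)^(12 / length(r)) - 1  # annualized geometric return
round(c(Calmar = ann.ret / max.dd, Sterling = ann.ret / (max.dd + 0.1)), 3)
## ---------------------------------------------------------------------------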
} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd 2013-08-24 12:19:30 UTC (rev 2872) @@ -1,77 +1,77 @@ -\name{QP.Norm} -\alias{Normalized.CalmarRatio} -\alias{Normalized.SterlingRatio} -\alias{QP.Norm} -\alias{SterlingRatio.Normalized} -\title{QP function fo calculation of Sharpe Ratio} -\usage{ - QP.Norm(R, tau, scale = NA) - - SterlingRatio.Normalized(R, tau = 1, scale = NA, - excess = 0.1) -} -\arguments{ - \item{R}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} - - \item{scale}{number of periods in a year (daily scale = - 252, monthly scale = 12, quarterly scale = 4)} - - \item{excess}{for Sterling Ratio, excess amount to add to - the max drawdown, traditionally and default .1 (10\%)} -} -\description{ - calculate a Normalized Calmar or Sterling reward/risk - ratio -} -\details{ - Normalized Calmar and Sterling Ratios are yet another - method of creating a risk-adjusted measure for ranking - investments similar to the \code{\link{SharpeRatio}}. - - Both the Normalized Calmar and the Sterling ratio are the - ratio of annualized return over the absolute value of the - maximum drawdown of an investment. The Sterling ratio - adds an excess risk measure to the maximum drawdown, - traditionally and defaulting to 10\%. - - It is also traditional to use a three year return series - for these calculations, although the functions included - here make no effort to determine the length of your - series. If you want to use a subset of your series, - you'll need to truncate or subset the input data to the - desired length. - - Many other measures have been proposed to do similar - reward to risk ranking. It is the opinion of this author - that newer measures such as Sortino's - \code{\link{UpsidePotentialRatio}} or Favre's modified - \code{\link{SharpeRatio}} are both \dQuote{better} - measures, and should be preferred to the Calmar or - Sterling Ratio. -} -\examples{ -data(managers) - Normalized.CalmarRatio(managers[,1,drop=FALSE]) - Normalized.CalmarRatio(managers[,1:6]) - Normalized.SterlingRatio(managers[,1,drop=FALSE]) - Normalized.SterlingRatio(managers[,1:6]) -} -\author{ - Brian G. Peterson -} -\references{ - Bacon, Carl. \emph{Magdon-Ismail, M. and Amir Atiya, - Maximum drawdown. Risk Magazine, 01 Oct 2004. 
-} -\seealso{ - \code{\link{Return.annualized}}, \cr - \code{\link{maxDrawdown}}, \cr - \code{\link{SharpeRatio.modified}}, \cr - \code{\link{UpsidePotentialRatio}} -} -\keyword{distribution} -\keyword{models} -\keyword{multivariate} -\keyword{ts} - +\name{QP.Norm} +\alias{Normalized.CalmarRatio} +\alias{Normalized.SterlingRatio} +\alias{QP.Norm} +\alias{SterlingRatio.Normalized} +\title{QP function fo calculation of Sharpe Ratio} +\usage{ + QP.Norm(R, tau, scale = NA) + + SterlingRatio.Normalized(R, tau = 1, scale = NA, + excess = 0.1) +} +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} + + \item{scale}{number of periods in a year (daily scale = + 252, monthly scale = 12, quarterly scale = 4)} + + \item{excess}{for Sterling Ratio, excess amount to add to + the max drawdown, traditionally and default .1 (10\%)} +} +\description{ + calculate a Normalized Calmar or Sterling reward/risk + ratio +} +\details{ + Normalized Calmar and Sterling Ratios are yet another + method of creating a risk-adjusted measure for ranking + investments similar to the \code{\link{SharpeRatio}}. + + Both the Normalized Calmar and the Sterling ratio are the + ratio of annualized return over the absolute value of the + maximum drawdown of an investment. The Sterling ratio + adds an excess risk measure to the maximum drawdown, + traditionally and defaulting to 10\%. + + It is also traditional to use a three year return series + for these calculations, although the functions included + here make no effort to determine the length of your + series. If you want to use a subset of your series, + you'll need to truncate or subset the input data to the + desired length. + + Many other measures have been proposed to do similar + reward to risk ranking. It is the opinion of this author + that newer measures such as Sortino's + \code{\link{UpsidePotentialRatio}} or Favre's modified + \code{\link{SharpeRatio}} are both \dQuote{better} + measures, and should be preferred to the Calmar or + Sterling Ratio. +} +\examples{ +data(managers) + Normalized.CalmarRatio(managers[,1,drop=FALSE]) + Normalized.CalmarRatio(managers[,1:6]) + Normalized.SterlingRatio(managers[,1,drop=FALSE]) + Normalized.SterlingRatio(managers[,1:6]) +} +\author{ + Brian G. Peterson +} +\references{ + Bacon, Carl. \emph{Magdon-Ismail, M. and Amir Atiya, + Maximum drawdown. Risk Magazine, 01 Oct 2004. +} +\seealso{ + \code{\link{Return.annualized}}, \cr + \code{\link{maxDrawdown}}, \cr + \code{\link{SharpeRatio.modified}}, \cr + \code{\link{UpsidePotentialRatio}} +} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{ts} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd 2013-08-24 12:19:30 UTC (rev 2872) @@ -32,9 +32,13 @@ Peter Carl, Brian Peterson, Shubhankit Mohan } \references{ - "An econometric model of serial correlation and - illiquidity in hedge fund returns" Mila Getmansky, Andrew - W. Lo, Igor Makarov + \emph{Getmansky, Mila, Lo, Andrew W. and Makarov, Igor} + An Econometric Model of Serial Correlation and + Illiquidity in Hedge Fund Returns (March 1, 2003). MIT + Sloan Working Paper No. 4288-03; MIT Laboratory for + Financial Engineering Working Paper No. LFE-1041A-03; + EFMA 2003 Helsinki Meetings. 
Available at SSRN: + \url{http://ssrn.com/abstract=384700} } \keyword{distribution} \keyword{models} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd 2013-08-24 12:19:30 UTC (rev 2872) @@ -18,8 +18,8 @@ are not observed. The returns to hedge funds and other alternative investments are often highly serially correlated. We propose an econometric model of return - smoothingand develop estimators for the smoothing - pro?le as well as a smoothing-adjusted Sharpe ratio. + smoothing and \emph{develop estimators for the smoothing + profile as well as a smoothing-adjusted Sharpe ratio}. } \details{ To quantify the impact of all of these possible sources @@ -27,9 +27,20 @@ return of a hedge fund in period 't'; and let R(t) satisfy the following linear single-factor model: where: \deqn{R(0,t) = \theta_{0}R(t) + \theta_{1}R(t-1) + - \theta_{2}R(t-2) .... + \theta_{k}R(t-k)} where + \theta_{2}R(t-2) .... + \theta_{k}R(t-k)} where \eqn{\theta_i} is defined as the weighted lag of - autocorrelated lag and whose sum is 1. + the autocorrelated lag, whose sum is 1. \deqn{\theta(j) + \in [0,1], j = 0,1,....,k} and, \deqn{\theta_0 + + \theta_1 + \theta_2 + \cdots + \theta_k = 1} Using + the methods outlined above, the paper estimates the + smoothing model using a maximum likelihood + procedure, programmed in Matlab using the Optimization + Toolbox and replicated in Stata using its MA(k) estimation + routine. Using the time series analysis and computational + finance ("\bold{tseries}") library, we fit an + \bold{ARMA} model to a univariate time series by + conditional least squares. For exact maximum likelihood + estimation, arima0 from package \bold{stats} can be used. } \author{ Brian Peterson,Peter Carl, Shubhankit Mohan @@ -38,8 +49,12 @@ Mila Getmansky, Andrew W. Lo, Igor Makarov,\emph{An econometric model of serial correlation and illiquidity in hedge fund Returns}, Journal of Financial - Economics 74 (2004). + Economics 74 (2004). + \url{http://ssrn.com/abstract=384700} } +\seealso{ + Return.Geltner +} \keyword{distribution} \keyword{model} \keyword{multivariate} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.Okunev.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.Okunev.Rd 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.Okunev.Rd 2013-08-24 12:19:30 UTC (rev 2872) @@ -19,16 +19,51 @@ Geltner Return Model } \details{ - dffd + Given a sample of historical returns \eqn{R(1), R(2), + . . ., R(T)}, the method assumes the fund manager smooths + returns in the following manner: \deqn{ r(0,t) = \sum + \beta (i) r(0,t-i) + (1- \alpha)r(m,t) } where \deqn{ + \sum \beta(i) = (1- \alpha) } \bold{r(0,t)}: is the + observed (reported) return at time t (with 0 adjustments + to reported returns), \bold{r(m,t)}: is the true + underlying (unreported) return at time t (determined by + making m adjustments to reported returns). + + To remove the \bold{first m orders} of autocorrelation + from a given return series we would proceed in a manner + very similar to that detailed in + \code{\link{Return.Geltner}}.
We would initially + remove the first order autocorrelation, then proceed to + eliminate the second order autocorrelation through the + iteration process. In general, to remove any order, m, + autocorrelations from a given return series we would make + the following transformation to returns. This removes the + autocorrelation structure in the original return series + without making any assumptions regarding the actual time + series properties of the underlying process. We are + implicitly assuming by this approach that the + autocorrelations that arise in reported returns are + entirely due to the smoothing behavior funds engage in + when reporting results. In fact, the method may be + adopted to produce any desired level of autocorrelation + at any lag and is not limited to simply eliminating all + autocorrelations. } \examples{ data(managers) head(Return.Okunev(managers[,1:3]),n=3) } +\author{ + Peter Carl, Brian Peterson, Shubhankit Mohan +} \references{ - "Hedge Fund Risk Factors and Value at Risk of Credit - Trading Strategies , John Okunev & Derek White + Okunev, John and White, Derek R., \emph{ Hedge Fund Risk + Factors and Value at Risk of Credit Trading Strategies} + (October 2003). Available at SSRN: + \url{http://ssrn.com/abstract=460641} } +\seealso{ + \code{\link{Return.Geltner}} \cr +} \keyword{distribution} \keyword{models} \keyword{multivariate} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/SterlingRatio.Norm.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/SterlingRatio.Norm.Rd 2013-08-24 09:24:23 UTC (rev 2871) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/SterlingRatio.Norm.Rd 2013-08-24 12:19:30 UTC (rev 2872) @@ -1,6 +1,6 @@ \name{SterlingRatio.Norm} \alias{SterlingRatio.Norm} -\title{Normalized Sterling reward/risk ratio} +\title{Normalized Sterling Ratio} \usage{ SterlingRatio.Norm(R, tau = 1, scale = NA, excess = 0.1) } @@ -15,23 +15,26 @@ the max drawdown, traditionally and default .1 (10\%)} } \description{ - Normalized Sterling and Sterling Ratios are yet another - method of creating a risk-adjusted measure for ranking - investments similar to the Sharpe Ratio. + Normalized Sterling Ratio is another method of creating a + risk-adjusted measure for ranking investments similar to + the Sharpe Ratio. } \details{ Both the Normalized Sterling and the Calmar ratio are the ratio of annualized return over the absolute value of the maximum drawdown of an investment. The Sterling ratio - adds an excess risk measure to the maximum drawdown, - traditionally and defaulting to 10%. + adds an \bold{excess risk} measure to the maximum + drawdown, traditionally and defaulting to 10\%. - It is also traditional to use a three year return series - for these calculations, although the functions included - here make no effort to determine the length of your - series. If you want to use a subset of your series, - you'll need to truncate or subset the input data to the - desired length. + \deqn{Sterling Ratio = [Return over (0,T)]/[max + Drawdown(0,T) - 10\%]} It is also \emph{traditional} to + use a three year return series for these calculations, + although the functions included here make no effort to + determine the length of your series. If you want to use + a subset of your series, you'll need to truncate or + subset the input data to the desired length. Malik + Magdon-Ismail implemented a scaling law for different + \eqn{\mu}, \eqn{\sigma} and \eqn{T}. } \examples{ data(managers) @@ -42,8 +45,9 @@ Brian G.
Peterson , Peter Carl , Shubhankit Mohan } \references{ - Bacon, Carl. \emph{Magdon-Ismail, M. and Amir Atiya, - Maximum drawdown. Risk Magazine, 01 Oct 2004. + Bacon, Carl, Magdon-Ismail, M. and Amir Atiya,\emph{ + Maximum drawdown. Risk Magazine,} 01 Oct 2004. + \url{http://www.cs.rpi.edu/~magdon/talks/mdd_NYU04.pdf} } \keyword{distribution} \keyword{models} From noreply at r-forge.r-project.org Sat Aug 24 19:48:09 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 24 Aug 2013 19:48:09 +0200 (CEST) Subject: [Returnanalytics-commits] r2873 - in pkg/PortfolioAnalytics: R man Message-ID: <20130824174809.CE9AF1850D1@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-24 19:48:09 +0200 (Sat, 24 Aug 2013) New Revision: 2873 Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd Log: Modifying chart.Weights.EF based on weightsPlot in fPortfolio package. Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-24 12:19:30 UTC (rev 2872) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-24 17:48:09 UTC (rev 2873) @@ -183,35 +183,62 @@ #' #' @param object object of class 'efficient.frontier' created by \code{\link{create.EfficientFrontier}}. #' @param colorset color palette to use. -#' @param ... passthrough parameters to \code{chart.StackedBar}. +#' @param ... passthrough parameters to \code{barplot}. #' @param match.col match.col string name of column to use for risk (horizontal axis). #' Must match the name of an objective. #' @param main main title used in the plot. -#' @param las sets the orientation of the axis labels, as described in \code{\link{par}}. #' @param cex.lab The magnification to be used for x- and y-axis labels relative to the current setting of 'cex'. #' @param cex.axis The magnification to be used for sizing the axis text relative to the current setting of 'cex', similar to \code{\link{plot}}. #' @param cex.legend The magnification to be used for sizing the legend relative to the current setting of 'cex', similar to \code{\link{plot}}. -#' @param legend.loc places a legend into a location on the chart similar to \code{\link{chart.TimeSeries}}. The default, "under," is the only location currently implemented for this chart. Use 'NULL' to remove the legend. #' @param legend.labels character vector to use for the legend labels #' @param element.color provides the color for drawing less-important chart elements, such as the box lines, axis lines, etc. #' @author Ross Bennett #' @export -chart.Weights.EF <- function(object, colorset=NULL, ..., match.col="ES", main="EF Weights", las=1, cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.loc="under", legend.labels=NULL, element.color="darkgray"){ +chart.Weights.EF <- function(object, colorset=NULL, ..., match.col="ES", main="EF Weights", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray"){ + # using ideas from weightsPlot.R in fPortfolio package + if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") frontier <- object$frontier + # get the columns with weights cnames <- colnames(frontier) wts_idx <- grep(pattern="^w\\.", cnames) wts <- frontier[, wts_idx] - if(!is.null(legend.labels)){ - # use legend.labels passed in by user - colnames(wts) <- legend.labels - } else { - # remove w. 
from the column names - colnames(wts) <- gsub(pattern="^w\\.", replacement="", cnames[wts_idx]) + # compute the weights for the barplot + pos.weights <- +0.5 * (abs(wts) + wts) + neg.weights <- -0.5 * (abs(wts) - wts) + + # Define Plot Range: + ymax <- max(rowSums(pos.weights)) + ymin <- min(rowSums(neg.weights)) + range <- ymax - ymin + ymax <- ymax + 0.005 * range + ymin <- ymin - 0.005 * range + dim <- dim(wts) + range <- dim[1] + xmin <- 0 + xmax <- range + 0.2 * range + + # set the colorset if no colorset is passed in + if(is.null(colorset)) + colorset <- 1:dim[2] + + # plot the positive weights + barplot(t(pos.weights), col = colorset, space = 0, ylab = "", + xlim = c(xmin, xmax), ylim = c(ymin, ymax), + border = element.color, cex.axis=cex.axis, ...) + + # set the legend information + if(is.null(legend.labels)){ + legend.labels <- gsub(pattern="^w\\.", replacement="", cnames[wts_idx]) } + legend("topright", legend = legend.labels, bty = "n", cex = cex.legend, fill = colorset) + # plot the negative weights + barplot(t(neg.weights), col = colorset, space = 0, add = TRUE, border = element.color, cex.axis=cex.axis, axes=FALSE, ...) + + # return along the efficient frontier # get the "mean" column mean.mtc <- pmatch("mean", cnames) if(is.na(mean.mtc)) { @@ -219,6 +246,7 @@ } if(is.na(mean.mtc)) stop("could not match 'mean' with column name of extractStats output") + # risk along the efficient frontier # get the match.col column mtc <- pmatch(match.col, cnames) if(is.na(mtc)) { @@ -226,13 +254,23 @@ } if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") - # plot the 'match.col' (risk) as the x-axis labels - xlabels <- round(frontier[, mtc], 4) + # Add labels + ef.return <- frontier[, mean.mtc] + ef.risk <- frontier[, mtc] + n.risk <- length(ef.risk) + n.labels <- 6 + M <- c(0, ( 1:(n.risk %/% n.labels) ) ) * n.labels + 1 + # use 3 significant digits + axis(3, at = M, labels = signif(ef.risk[M], 3), cex.axis=cex.axis) + axis(1, at = M, labels = signif(ef.return[M], 3), cex.axis=cex.axis) - chart.StackedBar(w=wts, colorset=colorset, ..., main=main, las=las, space=0, - cex.lab=cex.lab, cex.axis=cex.axis, cex.legend=cex.legend, - legend.loc=legend.loc, element.color=element.color, - xaxis.labels=xlabels, xlab=match.col, ylab="Weights") + # axis labels and titles + mtext("Risk", side = 3, line = 2, adj = 1, cex = cex.lab) + mtext("Return", side = 1, line = 2, adj = 1, cex = cex.lab) + mtext("Weight", side = 2, line = 2, adj = 1, cex = cex.lab) + # add title + mtext(main, adj = 0, line = 2.5, font = 2, cex = 0.8) + box(col=element.color) } #' @rdname chart.EfficientFrontier Modified: pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd 2013-08-24 12:19:30 UTC (rev 2872) +++ pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd 2013-08-24 17:48:09 UTC (rev 2873) @@ -3,9 +3,8 @@ \title{chart the weights along the efficient frontier} \usage{ chart.Weights.EF(object, colorset = NULL, ..., - match.col = "ES", main = "EF Weights", las = 1, - cex.lab = 0.8, cex.axis = 0.8, cex.legend = 0.8, - legend.loc = "under", legend.labels = NULL, + match.col = "ES", main = "EF Weights", cex.lab = 0.8, + cex.axis = 0.8, cex.legend = 0.8, legend.labels = NULL, element.color = "darkgray") } \arguments{ @@ -14,8 +13,7 @@ \item{colorset}{color palette to use.} - \item{...}{passthrough parameters to - \code{chart.StackedBar}.} + \item{...}{passthrough parameters to 
\code{barplot}.} \item{match.col}{match.col string name of column to use for risk (horizontal axis). Must match the name of an @@ -23,9 +21,6 @@ \item{main}{main title used in the plot.} - \item{las}{sets the orientation of the axis labels, as - described in \code{\link{par}}.} - \item{cex.lab}{The magnification to be used for x- and y-axis labels relative to the current setting of 'cex'.} @@ -37,12 +32,6 @@ the legend relative to the current setting of 'cex', similar to \code{\link{plot}}.} - \item{legend.loc}{places a legend into a location on the - chart similar to \code{\link{chart.TimeSeries}}. The - default, "under," is the only location currently - implemented for this chart. Use 'NULL' to remove the - legend.} - \item{legend.labels}{character vector to use for the legend labels} From noreply at r-forge.r-project.org Sat Aug 24 19:52:50 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 24 Aug 2013 19:52:50 +0200 (CEST) Subject: [Returnanalytics-commits] r2874 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: R man Message-ID: <20130824175250.BE510185C75@r-forge.r-project.org> Author: shubhanm Date: 2013-08-24 19:52:50 +0200 (Sat, 24 Aug 2013) New Revision: 2874 Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDD.Opt.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDrawdown.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Norm.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.ComparitiveReturn.GLM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.UnsmoothReturn.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CDD.Opt.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.ComparitiveReturn.GLM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.EmaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.NormDD.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.UnsmoothReturn.Rd Log: /.Rd Completed Documentation Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/AcarSim.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/AcarSim.R 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/AcarSim.R 2013-08-24 17:52:50 UTC (rev 2874) @@ -1,16 +1,21 @@ -#' @title Acar and Shane Maximum Loss +#' @title Acar-Shane Maximum Loss Plot #' #'@description To get some insight on the relationships between maximum drawdown per unit of volatility #'and mean return divided by volatility, we have proceeded to Monte-Carlo simulations. #' We have simulated cash flows over a period of 36 monthly returns and measured maximum #'drawdown for varied levels of annualised return divided by volatility varying from minus -#' two to two by step of 0.1. The process has been repeated six thousand times. +#' \emph{two to two} by step of \emph{0.1}. The process has been repeated \bold{six thousand times}. +#' @details Unfortunately, there is no \bold{analytical formula} to establish the maximum drawdown properties under +#' the random walk assumption.
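## ---------------------------------------------------------------------------
## Editor's note: a minimal sketch (not part of any commit above) of the
## Monte-Carlo experiment AcarSim describes: simulate 36 monthly returns on a
## grid of annualized mean/volatility ratios and record the maximum drawdown
## per unit of volatility. The 1000 paths per grid point are illustrative
## (the paper uses 6000).
set.seed(123)
k.grid <- seq(-2, 2, by = 0.5)               # annualized mean / volatility
sigma.m <- 0.10 / sqrt(12)                   # monthly volatility for 10% annual vol
mdd.per.vol <- sapply(k.grid, function(k) {
  mdd <- replicate(1000, {
    x <- cumsum(rnorm(36, mean = k * 0.10 / 12, sd = sigma.m))
    max(cummax(x) - x)                       # maximum drawdown of the cumulative path
  })
  mean(mdd) / sigma.m                        # drawdown per unit of (monthly) volatility
})
round(cbind(k.grid, mdd.per.vol), 2)
## ---------------------------------------------------------------------------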
We should note first that due to its definition, the maximum drawdown +#' divided by volatility is a function only of the ratio of mean to volatility. +#' \deqn{MD/[\sigma]= Min (\sum[X(j)])/\sigma = F(\mu/\sigma)} +#' where j varies from 1 to n, which is the number of drawdowns in the simulation #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns #' @author Peter Carl, Brian Peterson, Shubhankit Mohan #' @references Maximum Loss and Maximum Drawdown in Financial Markets,\emph{International Conference Sponsored by BNP and Imperial College on: -#' Forecasting Financial Markets, London, United Kingdom, May 1997} -#' @keywords Maximum Loss Simulared Drawdown +#' Forecasting Financial Markets, London, United Kingdom, May 1997} \url{http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf} +#' @keywords Maximum Loss Simulated Drawdown #' @examples #' library(PerformanceAnalytics) #' AcarSim(edhec) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDD.Opt.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDD.Opt.R 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDD.Opt.R 2013-08-24 17:52:50 UTC (rev 2874) @@ -2,7 +2,7 @@ #' #' @description A new one-parameter family of risk measures called Conditional Drawdown (CDD) has #'been proposed. These measures of risk are functionals of the portfolio drawdown (underwater) curve considered in active portfolio management. For some value of the tolerance -#' parameter, in the case of a single sample path, drawdown functional is defineed as +#' parameter, in the case of a single sample path, drawdown functional is defined as #'the mean of the worst (1 - \eqn{\alpha})% drawdowns. #'@details This section formulates a portfolio optimization problem with drawdown risk measure and suggests efficient optimization techniques for its solving. Optimal asset #' allocation considers: @@ -10,17 +10,24 @@ #' \item Generation of sample paths for the assets' rates of return. #' \item Uncompounded cumulative portfolio rate of return rather than compounded one. #' } +#' Given a sample path of instrument's rates of return (r(1),r(2)...,r(N)), +#' the CDD functional, \eqn{\delta[\alpha(w)]}, is computed by the following optimization procedure +#' \deqn{\delta[\alpha(w)] = min y + [1]/[(1-\alpha)N] \sum [z(k)]} +#' s.t. \deqn{z(k) \ge u(k)-y} +#' \deqn{u(k) \ge u(k-1) - r(k)} +#' which leads to a single optimal value of y equal to \eqn{\epsilon(\alpha)} if \eqn{\pi(\epsilon(\alpha)) > \alpha}, and to a +#' closed interval of optimal y with the left endpoint of \eqn{\epsilon(\alpha)} if \eqn{\pi(\epsilon(\alpha)) = \alpha} #' @param Ra return vector of the portfolio #' @param p confidence interval #' @author Peter Carl, Brian Peterson, Shubhankit Mohan -#' @references DRAWDOWN MEASURE IN PORTFOLIO OPTIMIZATION,\emph{International Journal of Theoretical and Applied Finance} -#' ,Fall 1994, 49-58.Vol. 8, No. 1 (2005) 13-58 +#' @references Chekhlov, Alexei, Uryasev, Stanislav P. and Zabarankin, Michael, \emph{Drawdown Measure in Portfolio Optimization} (June 25, 2003).
Available at SSRN: \url{http://ssrn.com/abstract=544742} or \url{http://dx.doi.org/10.2139/ssrn.544742} #' @keywords Conditional Drawdown models #' @examples #' #'library(PerformanceAnalytics) #' data(edhec) #' CDDopt(edhec) +#' @seealso CDrawdown.R #' @rdname CDD.Opt #' @export Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDrawdown.R 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CDrawdown.R 2013-08-24 17:52:50 UTC (rev 2874) @@ -4,17 +4,23 @@ #'been proposed. These measures of risk are functionals of the portfolio drawdown (underwater) curve considered in active portfolio management. For some value of the tolerance #' parameter, in the case of a single sample path, drawdown functional is defined as #'the mean of the worst (1 - \eqn{\alpha})% drawdowns. -#'@details The CDD measure generalizes the notion of the drawdown functional to a multi-scenario case and can be considered as a -#'generalization of deviation measure to a dynamic case. The CDD measure includes the -#'Maximal Drawdown and Average Drawdown as its limiting cases. The model is focused on concept of drawdown measure which is in possession of all properties of a deviation measure,generalization of deviation measures to a dynamic case.Concept of risk profiling - Mixed Conditional Drawdown (generalization of CDD).Optimization techniques for CDD computation - reduction to linear programming (LP) problem. Portfolio optimization with constraint on Mixed CDD -#' The model develops concept of drawdown measure by generalizing the notion -#' of the CDD to the case of several sample paths for portfolio uncompounded rate -#' of return. +#'@details +#'The \bold{CDD} is related to Value-at-Risk (VaR) and Conditional Value-at-Risk +#'(CVaR) measures studied by Rockafellar and Uryasev. By definition, with +#'respect to a specified probability level \eqn{\alpha}, the \bold{\eqn{\alpha}-VaR} of a portfolio is the lowest +#'amount \eqn{\epsilon(\alpha)} such that, with probability \eqn{\alpha}, the loss will not exceed \eqn{\epsilon(\alpha)} in a specified +#'time T, whereas the \bold{\eqn{\alpha}-CVaR} is the conditional expectation of losses above that +#'amount \eqn{\epsilon(\alpha)}. Various issues about VaR methodology were discussed by Jorion. +#'The CDD is similar to CVaR and can be viewed as a modification of the CVaR +#'to the case when the loss-function is defined as a drawdown. CDD and CVaR are +#'conceptually related percentile-based risk performance functionals. #' @param Ra return vector of the portfolio #' @param p confidence interval #' @author Peter Carl, Brian Peterson, Shubhankit Mohan -#' @references DRAWDOWN MEASURE IN PORTFOLIO OPTIMIZATION,\emph{International Journal of Theoretical and Applied Finance} -#' ,Fall 1994, 49-58.Vol. 8, No. 1 (2005) 13-58 +#' @references Chekhlov, Alexei, Uryasev, Stanislav P. and Zabarankin, Michael, \emph{Drawdown Measure in Portfolio Optimization} (June 25, 2003).
Available at SSRN: \url{http://ssrn.com/abstract=544742} or \url{http://dx.doi.org/10.2139/ssrn.544742} #' @keywords Conditional Drawdown models #' @examples #' Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Norm.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Norm.R 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Norm.R 2013-08-24 17:52:50 UTC (rev 2874) @@ -26,7 +26,8 @@ #' @param excess for Sterling Ratio, excess amount to add to the max drawdown, #' traditionally and default .1 (10\%) #' @author Brian G. Peterson , Peter Carl , Shubhankit Mohan -#' @references Bacon, Carl. \emph{Magdon-Ismail, M. and Amir Atiya, Maximum drawdown. Risk Magazine, 01 Oct 2004. +#' @references Bacon, Carl, Magdon-Ismail, M. and Amir Atiya,\emph{ Maximum drawdown. Risk Magazine,} 01 Oct 2004. +#' \url{http://www.cs.rpi.edu/~magdon/talks/mdd_NYU04.pdf} #' @keywords ts multivariate distribution models #' @examples #' Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/EmaxDDGBM.R 2013-08-24 17:52:50 UTC (rev 2874) @@ -2,11 +2,18 @@ #' #' @description Works on the model specified by Magdon-Ismail which investigates the behavior of this statistic for a Brownian motion #' with drift. +#' @details If X(t) is a random process on [0, T], the maximum drawdown at time T, D(T), is defined by +#' \deqn{D(T) = sup [X(s) - X(t)]}, where \eqn{s \in [0,t]} and \eqn{t \in [0,T]}. +#'Informally, this is the largest drop from a peak to a bottom. In this paper, we investigate the +#'behavior of this statistic for a Brownian motion with drift. In particular, we give an infinite +#'series representation of its distribution, and consider its expected value. When the drift is zero, +#'we give an analytic expression for the expected value, and for non-zero drift, we give an infinite +#'series representation. For all cases, we compute the limiting \bold{(\eqn{T \to \infty})} behavior, which can be +#'logarithmic (\eqn{\mu} > 0), square root (\eqn{\mu} = 0), or linear (\eqn{\mu} < 0). #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns #' @author Peter Carl, Brian Peterson, Shubhankit Mohan #' @keywords Expected Drawdown Using Brownian Motion Assumptions -#' @references An Analysis of the maximum drawdown measure,\emph{Journal of Applied Probability} -#' (2004) +#' @references Magdon-Ismail, M., Atiya, A., Pratap, A., and Yaser S. Abu-Mostafa: On the Maximum Drawdown of a Brownian Motion, Journal of Applied Probability 41, pp.
147-161, 2004 \url{http://alumnus.caltech.edu/~amir/drawdown-jrnl.pdf} #' @keywords Drawdown models Brownian Motion Assumptions #' @examples #' Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.ComparitiveReturn.GLM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.ComparitiveReturn.GLM.R 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.ComparitiveReturn.GLM.R 2013-08-24 17:52:50 UTC (rev 2874) @@ -11,6 +11,8 @@ #' @param digits number of digits to round results to #' @author Peter Carl, Brian Peterson, Shubhankit Mohan #' @keywords ts unsmooth GLM return models +#' @references Okunev, John and White, Derek R., \emph{ Hedge Fund Risk Factors and Value at Risk of Credit Trading Strategies} (October 2003). +#' Available at SSRN: \url{http://ssrn.com/abstract=460641} #' @rdname table.ComparitiveReturn.GLM #' @export table.ComparitiveReturn.GLM <- Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.UnsmoothReturn.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.UnsmoothReturn.R 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.UnsmoothReturn.R 2013-08-24 17:52:50 UTC (rev 2874) @@ -1,17 +1,34 @@ -#' @title Compenent Decomposition of Table of Unsmooth Returns +#' @title Table of Unsmooth Returns #' #' @description Creates a table of estimates of moving averages for comparison across #' multiple instruments or funds as well as their standard error and -#' smoothing index +#' smoothing index, which is the Component Decomposition of the Table of Unsmooth Returns #' +#' @details The estimation method is based on a maximum likelihood estimation of a moving average +#' process (we use the innovations algorithm proposed by \bold{Brockwell and Davis} [1991]). The first +#' step of this approach consists in computing a series of de-meaned observed returns: +#' \deqn{X(t) = R(0,t)- \mu} +#' where \eqn{\mu} is the expected value of the series of observed returns. +#' As a consequence, the above equation can be written as: +#' \deqn{X(t)= \theta(0)\eta(t) + \theta(1)\eta(t-1) + ..... + \theta(k)\eta(t-k)} +#' with the additional assumption that: \bold{\eqn{\eta(k)= N(0,\sigma(\eta)^2)}} +#' The structure of the model and the two constraints suppose that the complete integration of +#'information in the price of the considered asset may take up to k periods because of its illiquidity. +#'In addition, according to Getmansky et al., this model is in line with previous models of nonsynchronous trading such as the one developed by \bold{Cohen, Maier, Schwartz and Whitcomb} +#' [1986]. +#' Smoothing has an impact on the third and fourth moments of the returns distribution too. #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns #' @param ci confidence interval, defaults to 95\% #' @param n number of series lags #' @param p confidence level for calculation, default p=.99 #' @param digits number of digits to round results to +#' @references Cavenaile, Laurent, Coen, Alain and Hubner, Georges,\emph{ The Impact of Illiquidity and Higher Moments of Hedge Fund Returns on Their Risk-Adjusted Performance and Diversification Potential} (October 30, 2009). Journal of Alternative Investments, Forthcoming.
Available at SSRN: \url{http://ssrn.com/abstract=1502698} Working paper is at \url{http://www.hec.ulg.ac.be/sites/default/files/workingpapers/WP_HECULg_20091001_Cavenaile_Coen_Hubner.pdf} #' @author Peter Carl, Brian Peterson, Shubhankit Mohan #' @keywords ts smooth return models #' @seealso Return.Geltner Return.GLM Return.Okunev #' #' #' @rdname table.UnsmoothReturn #' @export table.UnsmoothReturn <- Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R 2013-08-24 17:52:50 UTC (rev 2874) @@ -1,12 +1,15 @@ -#'@title Generalised Lambda Distribution Simulated Drardown -#' -#'@description To simulate net asset value (NAV) series where skewness and kurtosis are zero, +#'@title Generalised Lambda Distribution Simulated Drawdown +#'@description When selecting a hedge fund manager, one risk measure investors often +#' consider is drawdown. How should drawdown distributions look? Carr Futures' +#' Galen Burghardt, Ryan Duncan and Lianyan Liu share some insights from their +#' research to show investors how to begin to answer this tricky question. +#'@details To simulate net asset value (NAV) series where skewness and kurtosis are zero, #' we draw sample returns from a lognormal return distribution. To capture skewness -#' and kurtosis, we sample returns from a generalised lambda distribution.The values of -#' skewness and excess kurtosis used were roughly consistent with the range of values we +#' and kurtosis, we sample returns from a \bold{generalised \eqn{\lambda} distribution}. The values of +#' skewness and excess kurtosis used were roughly consistent with the range of values the paper #' observed for commodity trading advisers in our database. The NAV series is constructed #' from the return series. The simulated drawdowns are then derived and used to produce -#' the theoretical drawdown distributions. A typical run usually requires 10,000 +#' the theoretical drawdown distributions. A typical run usually requires \bold{10,000} #' iterations to produce a smooth distribution. #' #' @@ -16,8 +19,10 @@ #' working paper.} #' \code{\link[stats]{}} \cr #' \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} +#' Burghardt, G., Duncan, R. and L. Liu, \emph{Deciphering drawdown}. Risk magazine, Risk management for investors, September, S16-S20, 2003. \url{http://www.risk.net/data/risk/pdf/investor/0903_risk.pdf} #' @author Peter Carl, Brian Peterson, Shubhankit Mohan #' @keywords Simulated Drawdown Using Brownian Motion Assumptions +#' @seealso Drawdowns.R #' @rdname table.normDD #' @export table.NormDD <- Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd 2013-08-24 17:52:50 UTC (rev 2874) @@ -1,6 +1,6 @@ \name{AcarSim} \alias{AcarSim} -\title{Acar and Shane Maximum Loss} +\title{Acar-Shane Maximum Loss Plot} \usage{ AcarSim(R) } @@ -15,9 +15,20 @@ simulations. We have simulated cash flows over a period of 36 monthly returns and measured maximum drawdown for varied levels of annualised return divided by volatility - varying from minus two to two by step of 0.1.
The process - has been repeated six thousand times. + varying from minus \emph{two to two} by step of + \emph{0.1}. The process has been repeated \bold{six + thousand times}. } +\details{ + Unfortunately, there is no \bold{analytical formula} to + establish the maximum drawdown properties under the + random walk assumption. We should note first that due to + its definition, the maximum drawdown divided by + volatility is a function only of the ratio of mean to + volatility. \deqn{MD/[\sigma]= Min (\sum[X(j)])/\sigma + = F(\mu/\sigma)} where j varies from 1 to n, which is the + number of drawdowns in the simulation +} \examples{ library(PerformanceAnalytics) AcarSim(edhec) } \author{ Peter Carl, Brian Peterson, Shubhankit Mohan } \references{ Maximum Loss and Maximum Drawdown in Financial Markets,\emph{International Conference Sponsored by BNP and Imperial College on: Forecasting Financial Markets, London, United Kingdom, May 1997} + \url{http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf} } \keyword{Drawdown} \keyword{Loss} \keyword{Maximum} -\keyword{Simulared} +\keyword{Simulated} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CDD.Opt.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CDD.Opt.Rd 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CDD.Opt.Rd 2013-08-24 17:52:50 UTC (rev 2874) @@ -17,7 +17,7 @@ drawdown (underwater) curve considered in active portfolio management. For some value of the tolerance parameter, in the case of a single sample path, drawdown - functional is defineed as the mean of the worst (1 - + functional is defined as the mean of the worst (1 - \eqn{\alpha})% drawdowns. } \details{ @@ -27,7 +27,17 @@ allocation considers: \enumerate{ \item Generation of sample paths for the assets' rates of return. \item Uncompounded cumulative portfolio rate of return rather - than compounded one. } + than compounded one. } Given a sample path of + instrument's rates of return (r(1),r(2)...,r(N)), the CDD + functional, \eqn{\delta[\alpha(w)]}, is computed by the + following optimization procedure \deqn{\delta[\alpha(w)] + = min y + [1]/[(1-\alpha)N] \sum [z(k)]} s.t. \deqn{z(k) + \ge u(k)-y} \deqn{u(k) \ge u(k-1) - + r(k)} which leads to a single optimal value of y equal to + \eqn{\epsilon(\alpha)} if \eqn{\pi(\epsilon(\alpha)) > + \alpha}, and to a closed interval of optimal y with the + left endpoint of \eqn{\epsilon(\alpha)} if + \eqn{\pi(\epsilon(\alpha)) = \alpha} } \examples{ library(PerformanceAnalytics) data(edhec) CDDopt(edhec) } \author{ Peter Carl, Brian Peterson, Shubhankit Mohan } \references{ - DRAWDOWN MEASURE IN PORTFOLIO - OPTIMIZATION,\emph{International Journal of Theoretical - and Applied Finance} ,Fall 1994, 49-58.Vol. 8, No. 1 - (2005) 13-58 + Chekhlov, Alexei, Uryasev, Stanislav P. and Zabarankin, + Michael, \emph{Drawdown Measure in Portfolio + Optimization} (June 25, 2003). Available at SSRN: + \url{http://ssrn.com/abstract=544742} or + \url{http://dx.doi.org/10.2139/ssrn.544742} } +\seealso{ + CDrawdown.R +} \keyword{Conditional} \keyword{Drawdown} \keyword{models} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd 2013-08-24 17:52:50 UTC (rev 2874) @@ -45,8 +45,9 @@ Brian G.
\emph{Magdon-Ismail, M. and Amir Atiya, - Maximum drawdown. Risk Magazine, 01 Oct 2004. + Bacon, Carl, Magdon-Ismail, M. and Amir Atiya,\emph{ + Maximum drawdown. Risk Magazine,} 01 Oct 2004. + \url{http://www.cs.rpi.edu/~magdon/talks/mdd_NYU04.pdf} } \keyword{distribution} \keyword{models} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd 2013-08-24 17:52:50 UTC (rev 2874) @@ -44,22 +44,22 @@ the assets' rates of return. 2) Uncompounded cumulative portfolio rate of return rather than compounded one. - The CDD measure generalizes the notion of the drawdown - functional to a multi-scenario case and can be considered - as a generalization of deviation measure to a dynamic - case. The CDD measure includes the Maximal Drawdown and - Average Drawdown as its limiting cases. The model is - focused on concept of drawdown measure which is in - possession of all properties of a deviation - measure,generalization of deviation measures to a dynamic - case.Concept of risk profiling - Mixed Conditional - Drawdown (generalization of CDD).Optimization techniques - for CDD computation - reduction to linear programming - (LP) problem. Portfolio optimization with constraint on - Mixed CDD The model develops concept of drawdown measure - by generalizing the notion of the CDD to the case of - several sample paths for portfolio uncompounded rate of - return. + The \bold{CDD} is related to Value-at-Risk (VaR) and + Conditional Value-at-Risk (CVaR) measures studied by + Rockafellar and Uryasev . By definition, with respect to + a specified probability level \eqn{\alpha}, the + \bold{\eqn{\alpha}-VaR} of a portfolio is the lowest + amount \eqn{\epsilon} , \eqn{\alpha} such that, with + probability \eqn{\alpha}, the loss will not exceed + \eqn{\epsilon} , \eqn{\alpha} in a specified time T, + whereas the \bold{\eqn{\alpha}-CVaR} is the conditional + expectation of losses above that amount \eqn{\epsilon} . + Various issues about VaR methodology were discussed by + Jorion . The CDD is similar to CVaR and can be viewed as + a modification of the CVaR to the case when the + loss-function is defined as a drawdown. CDD and CVaR are + conceptually related percentile-based risk performance + functionals. } \examples{ library(PerformanceAnalytics) @@ -80,10 +80,11 @@ and Applied Finance} ,Fall 1994, 49-58.Vol. 8, No. 1 (2005) 13-58 - DRAWDOWN MEASURE IN PORTFOLIO - OPTIMIZATION,\emph{International Journal of Theoretical - and Applied Finance} ,Fall 1994, 49-58.Vol. 8, No. 1 - (2005) 13-58 + Chekhlov, Alexei, Uryasev, Stanislav P. and Zabarankin, + Michael, \emph{Drawdown Measure in Portfolio + Optimization} (June 25, 2003). 
Available at SSRN: + \url{http://ssrn.com/abstract=544742} or + \url{http://dx.doi.org/10.2139/ssrn.544742} } \keyword{Conditional} \keyword{Drawdown} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.ComparitiveReturn.GLM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.ComparitiveReturn.GLM.Rd 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.ComparitiveReturn.GLM.Rd 2013-08-24 17:52:50 UTC (rev 2874) @@ -23,6 +23,12 @@ \author{ Peter Carl, Brian Peterson, Shubhankit Mohan } +\references{ + Okunev, John and White, Derek R., \emph{ Hedge Fund Risk + Factors and Value at Risk of Credit Trading Strategies} + (October 2003). Available at SSRN: + \url{http://ssrn.com/abstract=460641} +} \keyword{GLM} \keyword{models} \keyword{return} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.EmaxDDGBM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.EmaxDDGBM.Rd 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.EmaxDDGBM.Rd 2013-08-24 17:52:50 UTC (rev 2874) @@ -13,6 +13,23 @@ investigates the behavior of this statistic for a Brownian motion with drift. } +\details{ + If X(t) is a random process on [0, T], the maximum + drawdown at time T, D(T), is defined as \deqn{D(T) + = sup [X(s) - X(t)]} where s belongs to [0,t] and t + belongs to [0,T]. Informally, this is the largest drop + from a peak to a bottom. In this paper, we investigate + the behavior of this statistic for a Brownian motion with + drift. In particular, we give an infinite series + representation of its distribution, and consider its + expected value. When the drift is zero, we give an + analytic expression for the expected value, and for + non-zero drift, we give an infinite series + representation. For all cases, we compute the limiting + \bold{(\eqn{T tends to \infty})} behavior, which can be + logarithmic (\eqn{\mu} > 0), square root (\eqn{\mu} = 0), + or linear (\eqn{\mu} < 0). +} \examples{ library(PerformanceAnalytics) data(edhec) @@ -22,8 +39,11 @@ Peter Carl, Brian Peterson, Shubhankit Mohan } \references{ - An Analysis of the maximum drawdown measure,\emph{Journal - of Applied Probability} (2004) + Magdon-Ismail, M., Atiya, A., Pratap, A., and Yaser S. + Abu-Mostafa: On the Maximum Drawdown of a Brownian + Motion, Journal of Applied Probability 41, pp. 147-161, + 2004 + \url{http://alumnus.caltech.edu/~amir/drawdown-jrnl.pdf} } \keyword{Assumptions} \keyword{Brownian} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.NormDD.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.NormDD.Rd 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.NormDD.Rd 2013-08-24 17:52:50 UTC (rev 2874) @@ -1,6 +1,6 @@ \name{table.NormDD} \alias{table.NormDD} -\title{Generalised Lambda Distribution Simulated Drardown} +\title{Generalised Lambda Distribution Simulated Drawdown} \usage{ table.NormDD(R, digits = 4) } @@ -9,17 +9,26 @@ or zoo object of asset returns} } \description{ + When selecting a hedge fund manager, one risk measure + investors often consider is drawdown. How should drawdown + distributions look?
Carr Futures' Galen Burghardt, Ryan + Duncan and Lianyan Liu share some insights from their + research to show investors how to begin to answer this + tricky question. +} +\details{ To simulate net asset value (NAV) series where skewness and kurtosis are zero, we draw sample returns from a lognormal return distribution. To capture skewness and - kurtosis, we sample returns from a generalised lambda - distribution.The values of skewness and excess kurtosis - used were roughly consistent with the range of values we - observed for commodity trading advisers in our database. - The NAV series is constructed from the return series. The - simulated drawdowns are then derived and used to produce - the theoretical drawdown distributions. A typical run - usually requires 10,000 iterations to produce a smooth + kurtosis, we sample returns from a \bold{generalised + \eqn{\lambda} distribution}. The values of skewness and + excess kurtosis used were roughly consistent with the + range of values the paper observed for commodity trading + advisers in its database. The NAV series is constructed + from the return series. The simulated drawdowns are then + derived and used to produce the theoretical drawdown + distributions. A typical run usually requires + \bold{10,000} iterations to produce a smooth distribution. } \author{ @@ -30,7 +39,14 @@ Autocorrelation, Stupid (November 2012) Newedge working paper.} \code{\link[stats]{}} \cr \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} + Burghardt, G., Duncan, R. and L. Liu, \emph{Deciphering + drawdown}. Risk magazine, Risk management for investors, + September, S16-S20, 2003. + \url{http://www.risk.net/data/risk/pdf/investor/0903_risk.pdf} } +\seealso{ + Drawdowns.R +} \keyword{Assumptions} \keyword{Brownian} \keyword{Drawdown} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.UnsmoothReturn.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.UnsmoothReturn.Rd 2013-08-24 17:48:09 UTC (rev 2873) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.UnsmoothReturn.Rd 2013-08-24 17:52:50 UTC (rev 2874) @@ -1,6 +1,6 @@ \name{table.UnsmoothReturn} \alias{table.UnsmoothReturn} -\title{Compenent Decomposition of Table of Unsmooth Returns} +\title{Table of Unsmooth Returns} \usage{ table.UnsmoothReturn(R, n = 3, p = 0.95, digits = 4) } @@ -19,11 +19,47 @@ \description{ Creates a table of estimates of moving averages for comparison across multiple instruments or funds as well - as their standard error and smoothing index + as their standard error and smoothing index, which is + a component decomposition of the table of unsmooth returns } +\details{ + The estimation method is based on a maximum likelihood + estimation of a moving average process (we use the + innovations algorithm proposed by \bold{Brockwell and + Davis} [1991]). The first step of this approach consists + in computing a series of de-meaned observed returns: + \deqn{X(t) = R(0,t)- \mu} where \eqn{\mu} is the expected + value of the series of observed returns. As a + consequence, the above equation can be written as: + \deqn{X(t)= \theta(0)\eta(t) + \theta(1)\eta(t-1) + \dots + + \theta(k)\eta(t-k)} with the additional assumption + that \bold{\eqn{\eta(k)= N(0,\sigma(\eta)^2)}}. The structure + of the model and the two constraints suppose that the + complete integration of information in the price of the + considered asset may take up to k periods because of its + illiquidity.
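To make the estimation step just described concrete, here is a minimal R sketch that fits the theta coefficients of the moving-average model above by maximum likelihood; the choice k = 2 and the use of the edhec data are assumptions for illustration only, not values taken from the package:

    library(PerformanceAnalytics)
    data(edhec)
    # de-meaned observed returns X(t), as in the first step above
    x <- coredata(edhec[, 1]) - mean(edhec[, 1])
    # ML fit of an MA(2) process (k = 2 is an assumed illustration value)
    fit <- arima(x, order = c(0, 0, 2), include.mean = FALSE)
    theta <- c(1, coef(fit))   # theta(0) normalised to 1
    theta / sum(theta)         # implied smoothing weights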
In addition, according to Getmansky et al., + this model is in line with previous models of + nonsynchronous trading such as the one developed by + \bold{Cohen, Maier, Schwartz and Whitcomb} [1986]. + Smoothing has an impact on the third and fourth moments + of the returns distribution too. +} \author{ Peter Carl, Brian Peterson, Shubhankit Mohan } +\references{ + Cavenaile, Laurent, Coen, Alain and Hubner, + Georges,\emph{ The Impact of Illiquidity and Higher + Moments of Hedge Fund Returns on Their Risk-Adjusted + Performance and Diversification Potential} (October 30, + 2009). Journal of Alternative Investments, Forthcoming. + Available at SSRN: \url{http://ssrn.com/abstract=1502698} + Working paper is at + \url{http://www.hec.ulg.ac.be/sites/default/files/workingpapers/WP_HECULg_20091001_Cavenaile_Coen_Hubner.pdf} +} +\seealso{ + Return.Geltner Return.GLM Return.Okunev +} \keyword{models} \keyword{return} \keyword{smooth} From noreply at r-forge.r-project.org Sat Aug 24 21:38:41 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 24 Aug 2013 21:38:41 +0200 (CEST) Subject: [Returnanalytics-commits] r2875 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130824193841.382C3185C4A@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-24 21:38:40 +0200 (Sat, 24 Aug 2013) New Revision: 2875 Added: pkg/PortfolioAnalytics/man/extractEfficientFrontier.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/R/extract.efficient.frontier.R Log: Adding function to extract efficient frontier from objects created by optimize.portfolio. Modifying chart.Weights.EF to work better with efficient.frontier objects. Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-24 17:52:50 UTC (rev 2874) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-24 19:38:40 UTC (rev 2875) @@ -45,6 +45,7 @@ export(diversification_constraint) export(diversification) export(extract.efficient.frontier) +export(extractEfficientFrontier) export(extractObjectiveMeasures) export(extractStats.optimize.portfolio.DEoptim) export(extractStats.optimize.portfolio.GenSA) Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-24 17:52:50 UTC (rev 2874) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-24 19:38:40 UTC (rev 2875) @@ -198,8 +198,16 @@ # using ideas from weightsPlot.R in fPortfolio package if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") - frontier <- object$frontier + if(is.list(object)){ + # Objects created with create.EfficientFrontier will be a list of 2 elements + frontier <- object$frontier + } else { + # Objects created with extractEfficientFrontier will only be an efficient.frontier object + frontier <- object + } + + # get the columns with weights cnames <- colnames(frontier) wts_idx <- grep(pattern="^w\\.", cnames) @@ -227,7 +235,8 @@ # plot the positive weights barplot(t(pos.weights), col = colorset, space = 0, ylab = "", xlim = c(xmin, xmax), ylim = c(ymin, ymax), - border = element.color, cex.axis=cex.axis, ...) + border = element.color, cex.axis=cex.axis, + axisnames=FALSE,...)
# set the legend information if(is.null(legend.labels)){ @@ -236,7 +245,8 @@ legend("topright", legend = legend.labels, bty = "n", cex = cex.legend, fill = colorset) # plot the negative weights - barplot(t(neg.weights), col = colorset, space = 0, add = TRUE, border = element.color, cex.axis=cex.axis, axes=FALSE, ...) + barplot(t(neg.weights), col = colorset, space = 0, add = TRUE, border = element.color, + cex.axis=cex.axis, axes=FALSE, axisnames=FALSE, ...) # return along the efficient frontier # get the "mean" column Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-24 17:52:50 UTC (rev 2874) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-24 19:38:40 UTC (rev 2875) @@ -331,3 +331,66 @@ R=R), class="efficient.frontier")) } +#' Extract the efficient frontier data points +#' +#' This function extracts the efficient frontier from an object created by +#' \code{\link{optimize.portfolio}}. +#' +#' If the object is an \code{optimize.portfolio.ROI} object and \code{match.col} +#' is "ES", "ETL", or "CVaR", then the mean-ETL efficient frontier will be +#' created via \code{meanetl.efficient.frontier}. +#' +#' If the object is an \code{optimize.portfolio.ROI} object and \code{match.col} +#' is "var", then the mean-var efficient frontier will be created via +#' \code{meanvar.efficient.frontier}. +#' +#' For objects created by \code{optimize.portfolo} with the DEoptim, random, or +#' pso solvers, the efficient frontier will be extracted from the object via +#' \code{extract.efficient.frontier}. This means that \code{optimize.portfolio} must +#' be run with \code{trace=TRUE} +#' +#' @param object an optimal portfolio object created by \code{optimize.portfolio} +#' @param match.col string name of column to use for risk (horizontal axis). +#' \code{match.col} must match the name of an objective in the \code{portfolio} +#' object. 
+#' @param n.portfolios number of portfolios to use to plot the efficient frontier +#' @return an \code{efficient.frontier} object with weights and other metrics along the efficient frontier +#' @author Ross Bennett +#' @export +extractEfficientFrontier <- function(object, match.col="ES", n.portfolios=25){ + # extract the efficient frontier from an optimize.portfolio output object + + if(!inherits(object, "optimize.portfolio")) stop("object must be of class 'optimize.portfolio'") + + if(inherits(object, "optimize.portfolio.GenSA")){ + stop("GenSA does not return any useable trace information for portfolios tested, thus we cannot extract an efficient frontier.") + } + + # get the portfolio and returns + portf <- object$portfolio + R <- object$R + if(is.null(R)) stop(paste("Not able to get asset returns from", object, "run optimize.portfolio with trace=TRUE")) + + # get the objective names and check if match.col is an objective name + objnames <- unlist(lapply(portf$objectives, function(x) x$name)) + if(!(match.col %in% objnames)){ + stop("match.col must match an objective name") + } + + # We need to create the efficient frontier if object is of class optimize.portfolio.ROI + if(inherits(object, "optimize.portfolio.ROI")){ + if(match.col %in% c("ETL", "ES", "CVaR")){ + frontier <- meanetl.efficient.frontier(portfolio=portf, R=R, n.portfolios=n.portfolios) + } + if(match.col == "var"){ + frontier <- meanvar.efficient.frontier(portfolio=portf, R=R, n.portfolios=n.portfolios) + } + } # end optimize.portfolio.ROI + + # use extract.efficient.frontier for otpimize.portfolio output objects with global solvers + if(inherits(object, c("optimize.portfolio.random", "optimize.portfolio.DEoptim", "optimize.portfolio.pso"))){ + frontier <- extract.efficient.frontier(object=object, match.col=match.col, n.portfolios=n.portfolios) + } + return(frontier) +} + Added: pkg/PortfolioAnalytics/man/extractEfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/extractEfficientFrontier.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/extractEfficientFrontier.Rd 2013-08-24 19:38:40 UTC (rev 2875) @@ -0,0 +1,48 @@ +\name{extractEfficientFrontier} +\alias{extractEfficientFrontier} +\title{Extract the efficient frontier data points} +\usage{ + extractEfficientFrontier(object, match.col = "ES", + n.portfolios = 25) +} +\arguments{ + \item{object}{an optimal portfolio object created by + \code{optimize.portfolio}} + + \item{match.col}{string name of column to use for risk + (horizontal axis). \code{match.col} must match the name + of an objective in the \code{portfolio} object.} + + \item{n.portfolios}{number of portfolios to use to plot + the efficient frontier} +} +\value{ + an \code{efficient.frontier} object with weights and + other metrics along the efficient frontier +} +\description{ + This function extracts the efficient frontier from an + object created by \code{\link{optimize.portfolio}}. +} +\details{ + If the object is an \code{optimize.portfolio.ROI} object + and \code{match.col} is "ES", "ETL", or "CVaR", then the + mean-ETL efficient frontier will be created via + \code{meanetl.efficient.frontier}. + + If the object is an \code{optimize.portfolio.ROI} object + and \code{match.col} is "var", then the mean-var + efficient frontier will be created via + \code{meanvar.efficient.frontier}. 
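A hedged usage sketch for the extractEfficientFrontier function added in this commit; the portfolio specification below is an assumption for illustration and is not taken from the package tests:

    library(PortfolioAnalytics)
    data(edhec)
    R <- edhec[, 1:4]
    # assumed mean-ES specification
    portf <- portfolio.spec(assets = colnames(R))
    portf <- add.constraint(portfolio = portf, type = "full_investment")
    portf <- add.constraint(portfolio = portf, type = "long_only")
    portf <- add.objective(portfolio = portf, type = "return", name = "mean")
    portf <- add.objective(portfolio = portf, type = "risk", name = "ES")
    # trace=TRUE is required so the frontier can be extracted afterwards
    opt <- optimize.portfolio(R = R, portfolio = portf,
                              optimize_method = "random", trace = TRUE)
    ef <- extractEfficientFrontier(opt, match.col = "ES", n.portfolios = 10)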
+ + For objects created by \code{optimize.portfolio} with the + DEoptim, random, or pso solvers, the efficient frontier + will be extracted from the object via + \code{extract.efficient.frontier}. This means that + \code{optimize.portfolio} must be run with + \code{trace=TRUE}. +} +\author{ + Ross Bennett +} + From noreply at r-forge.r-project.org Sat Aug 24 22:12:25 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 24 Aug 2013 22:12:25 +0200 (CEST) Subject: [Returnanalytics-commits] r2876 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130824201225.DE05F1854E7@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-24 22:12:25 +0200 (Sat, 24 Aug 2013) New Revision: 2876 Removed: pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/charts.efficient.frontier.R Log: Making chart.Weights.EF a generic method to work on efficient.frontier and optimize.portfolio objects. Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-24 19:38:40 UTC (rev 2875) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-24 20:12:25 UTC (rev 2876) @@ -19,6 +19,8 @@ export(chart.Scatter.ROI) export(chart.Scatter.RP) export(chart.Weights.DE) +export(chart.Weights.EF.efficient.frontier) +export(chart.Weights.EF.optimize.portfolio) export(chart.Weights.EF) export(chart.Weights.GenSA) export(chart.Weights.optimize.portfolio.DEoptim) Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-24 19:38:40 UTC (rev 2875) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-24 20:12:25 UTC (rev 2876) @@ -184,6 +184,8 @@ #' @param object object of class 'efficient.frontier' created by \code{\link{create.EfficientFrontier}}. #' @param colorset color palette to use. #' @param ... passthrough parameters to \code{barplot}. +#' @param n.portfolios number of portfolios to extract along the efficient frontier. +#' This is only used for objects of class \code{optimize.portfolio} #' @param match.col match.col string name of column to use for risk (horizontal axis). #' Must match the name of an objective. #' @param main main title used in the plot. @@ -194,7 +196,13 @@ #' @param element.color provides the color for drawing less-important chart elements, such as the box lines, axis lines, etc.
#' @author Ross Bennett #' @export -chart.Weights.EF <- function(object, colorset=NULL, ..., match.col="ES", main="EF Weights", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray"){ +chart.Weights.EF <- function(object, colorset=NULL, ..., n.portfolios=25, match.col="ES", main="EF Weights", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray"){ +UseMethod("chart.Weights.EF") +} + +#' @rdname chart.Weights.EF +#' @export +chart.Weights.EF.efficient.frontier <- function(object, colorset=NULL, ..., n.portfolios=25, match.col="ES", main="EF Weights", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray"){ # using ideas from weightsPlot.R in fPortfolio package if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") @@ -283,6 +291,20 @@ box(col=element.color) } +#' @rdname chart.Weights.EF +#' @export +chart.Weights.EF.optimize.portfolio <- function(object, colorset=NULL, ..., n.portfolios=25, match.col="ES", main="EF Weights", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray"){ + # chart the weights along the efficient frontier of an objected created by optimize.portfolio + + if(!inherits(object, "optimize.portfolio")) stop("object must be of class optimize.portfolio") + + frontier <- extractEfficientFrontier(object=object, match.col=match.col, n.portfolios=n.portfolios) + PortfolioAnalytics:::chart.Weights.EF(object=frontier, colorset=colorset, ..., + match.col=match.col, main=main, cex.lab=cex.lab, + cex.axis=cex.axis, cex.legend=cex.legend, + legend.labels=legend.labels, element.color=element.color) +} + #' @rdname chart.EfficientFrontier #' @export chart.EfficientFrontier.efficient.frontier <- function(object, chart.assets=TRUE, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ Deleted: pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd 2013-08-24 19:38:40 UTC (rev 2875) +++ pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd 2013-08-24 20:12:25 UTC (rev 2876) @@ -1,49 +0,0 @@ -\name{chart.Weights.EF} -\alias{chart.Weights.EF} -\title{chart the weights along the efficient frontier} -\usage{ - chart.Weights.EF(object, colorset = NULL, ..., - match.col = "ES", main = "EF Weights", cex.lab = 0.8, - cex.axis = 0.8, cex.legend = 0.8, legend.labels = NULL, - element.color = "darkgray") -} -\arguments{ - \item{object}{object of class 'efficient.frontier' - created by \code{\link{create.EfficientFrontier}}.} - - \item{colorset}{color palette to use.} - - \item{...}{passthrough parameters to \code{barplot}.} - - \item{match.col}{match.col string name of column to use - for risk (horizontal axis). 
Must match the name of an - objective.} - - \item{main}{main title used in the plot.} - - \item{cex.lab}{The magnification to be used for x- and - y-axis labels relative to the current setting of 'cex'.} - - \item{cex.axis}{The magnification to be used for sizing - the axis text relative to the current setting of 'cex', - similar to \code{\link{plot}}.} - - \item{cex.legend}{The magnification to be used for sizing - the legend relative to the current setting of 'cex', - similar to \code{\link{plot}}.} - - \item{legend.labels}{character vector to use for the - legend labels} - - \item{element.color}{provides the color for drawing - less-important chart elements, such as the box lines, - axis lines, etc.} -} -\description{ - This creates a stacked column chart of the weights of - portfolios along the efficient frontier. -} -\author{ - Ross Bennett -} - From noreply at r-forge.r-project.org Sat Aug 24 22:16:35 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 24 Aug 2013 22:16:35 +0200 (CEST) Subject: [Returnanalytics-commits] r2877 - in pkg/PortfolioAnalytics: R man Message-ID: <20130824201635.E3D8D183AD8@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-24 22:16:35 +0200 (Sat, 24 Aug 2013) New Revision: 2877 Added: pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R Log: Cleaning up documentation for chart.Weights.EF Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-24 20:12:25 UTC (rev 2876) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-24 20:16:35 UTC (rev 2877) @@ -181,7 +181,7 @@ #' #' This creates a stacked column chart of the weights of portfolios along the efficient frontier. #' -#' @param object object of class 'efficient.frontier' created by \code{\link{create.EfficientFrontier}}. +#' @param object object of class \code{efficient.frontier} or \code{optimize.portfolio}. #' @param colorset color palette to use. #' @param ... passthrough parameters to \code{barplot}. #' @param n.portfolios number of portfolios to extract along the efficient frontier. @@ -189,7 +189,7 @@ #' @param match.col match.col string name of column to use for risk (horizontal axis). #' Must match the name of an objective. #' @param main main title used in the plot. -#' @param cex.lab The magnification to be used for x- and y-axis labels relative to the current setting of 'cex'. +#' @param cex.lab The magnification to be used for x-axis and y-axis labels relative to the current setting of 'cex'. #' @param cex.axis The magnification to be used for sizing the axis text relative to the current setting of 'cex', similar to \code{\link{plot}}. #' @param cex.legend The magnification to be used for sizing the legend relative to the current setting of 'cex', similar to \code{\link{plot}}. 
#' @param legend.labels character vector to use for the legend labels Added: pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd 2013-08-24 20:16:35 UTC (rev 2877) @@ -0,0 +1,69 @@ +\name{chart.Weights.EF} +\alias{chart.Weights.EF} +\alias{chart.Weights.EF.efficient.frontier} +\alias{chart.Weights.EF.optimize.portfolio} +\title{chart the weights along the efficient frontier} +\usage{ + chart.Weights.EF(object, colorset = NULL, ..., + n.portfolios = 25, match.col = "ES", + main = "EF Weights", cex.lab = 0.8, cex.axis = 0.8, + cex.legend = 0.8, legend.labels = NULL, + element.color = "darkgray") + + chart.Weights.EF.efficient.frontier(object, + colorset = NULL, ..., n.portfolios = 25, + match.col = "ES", main = "EF Weights", cex.lab = 0.8, + cex.axis = 0.8, cex.legend = 0.8, legend.labels = NULL, + element.color = "darkgray") + + chart.Weights.EF.optimize.portfolio(object, + colorset = NULL, ..., n.portfolios = 25, + match.col = "ES", main = "EF Weights", cex.lab = 0.8, + cex.axis = 0.8, cex.legend = 0.8, legend.labels = NULL, + element.color = "darkgray") +} +\arguments{ + \item{object}{object of class \code{efficient.frontier} + or \code{optimize.portfolio}.} + + \item{colorset}{color palette to use.} + + \item{...}{passthrough parameters to \code{barplot}.} + + \item{n.portfolios}{number of portfolios to extract along + the efficient frontier. This is only used for objects of + class \code{optimize.portfolio}} + + \item{match.col}{match.col string name of column to use + for risk (horizontal axis). Must match the name of an + objective.} + + \item{main}{main title used in the plot.} + + \item{cex.lab}{The magnification to be used for x-axis + and y-axis labels relative to the current setting of + 'cex'.} + + \item{cex.axis}{The magnification to be used for sizing + the axis text relative to the current setting of 'cex', + similar to \code{\link{plot}}.} + + \item{cex.legend}{The magnification to be used for sizing + the legend relative to the current setting of 'cex', + similar to \code{\link{plot}}.} + + \item{legend.labels}{character vector to use for the + legend labels} + + \item{element.color}{provides the color for drawing + less-important chart elements, such as the box lines, + axis lines, etc.} +} +\description{ + This creates a stacked column chart of the weights of + portfolios along the efficient frontier. 
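A short usage sketch for the generic documented in this hunk, assuming the opt object from the earlier extractEfficientFrontier example (an optimize.portfolio result run with trace=TRUE):

    # dispatches to chart.Weights.EF.optimize.portfolio, which extracts the
    # frontier internally before drawing the stacked column chart of weights
    chart.Weights.EF(opt, match.col = "ES", n.portfolios = 10)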
+} +\author{ + Ross Bennett +} + From noreply at r-forge.r-project.org Sat Aug 24 22:33:49 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 24 Aug 2013 22:33:49 +0200 (CEST) Subject: [Returnanalytics-commits] r2878 - pkg/PortfolioAnalytics/sandbox Message-ID: <20130824203350.00ECE183EBB@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-24 22:33:49 +0200 (Sat, 24 Aug 2013) New Revision: 2878 Modified: pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R Log: Adding a couple examples to testing_efficient_frontier Modified: pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R 2013-08-24 20:16:35 UTC (rev 2877) +++ pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R 2013-08-24 20:33:49 UTC (rev 2878) @@ -14,6 +14,8 @@ data(edhec) R <- edhec[, 1:5] +# change the column names for better legends in plotting +colnames(R) <- c("CA", "CTAG", "DS", "EM", "EQM") funds <- colnames(R) # initial portfolio object @@ -41,10 +43,16 @@ # run optimize.portfolio and chart the efficient frontier for that object opt_meanvar <- optimize.portfolio(R=R, portfolio=meanvar.portf, optimize_method="ROI", trace=TRUE) chart.EfficientFrontier(opt_meanvar, match.col="var", n.portfolios=50) +# The weights along the efficient frontier can be plotted by passing in the +# optimize.portfolio output object +chart.Weights.EF(opt_meanvar, match.col="var") +# or we can extract the efficient frontier and then plot it +ef <- extractEfficientFrontier(object=opt_meanvar, match.col="var", n.portfolios=15) +chart.Weights.EF(ef, match.col="var", colorset=bluemono) # mean-etl efficient frontier meanetl.ef <- create.EfficientFrontier(R=R, portfolio=meanetl.portf, type="mean-etl") -chart.EfficientFrontier(meanetl.ef, match.col="ES", main="mean-ETL Efficient Frontier", type="b", col="blue") +chart.EfficientFrontier(meanetl.ef, match.col="ES", main="mean-ETL Efficient Frontier", type="l", col="blue") chart.Weights.EF(meanetl.ef, colorset=bluemono, match.col="ES") # mean-etl efficient frontier using random portfolios From noreply at r-forge.r-project.org Sun Aug 25 01:07:08 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 25 Aug 2013 01:07:08 +0200 (CEST) Subject: [Returnanalytics-commits] r2879 - in pkg/PerformanceAnalytics/sandbox/pulkit: . 
R man Message-ID: <20130824230709.07DC1184C81@r-forge.r-project.org> Author: pulkit Date: 2013-08-25 01:07:08 +0200 (Sun, 25 Aug 2013) New Revision: 2879 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd Log: some changes Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-24 20:33:49 UTC (rev 2878) +++ pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-24 23:07:08 UTC (rev 2879) @@ -1,5 +1,6 @@ export(AlphaDrawdown) export(BenchmarkSR) +export(BetaDrawdown) export(CDaR) export(CdarMultiPath) export(chart.BenchmarkSR) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-24 20:33:49 UTC (rev 2878) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-24 23:07:08 UTC (rev 2879) @@ -51,9 +51,10 @@ #'of Florida,September 2012. #' #'@examples +#'data(edhec) #'BetaDrawdown(edhec[,1],edhec[,2]) #' -#' +#'@export BetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ # DESCRIPTION: Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-24 20:33:49 UTC (rev 2878) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-24 23:07:08 UTC (rev 2879) @@ -31,7 +31,7 @@ #' #' # with S&P 500 data and T-bill data #' -#'dt<-read.zoo("returns.csv",sep=",",header = TRUE) +#'dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) #'dt<-as.xts(dt) #'EDDCOPS(dt[,1],delta = 0.33,gamma = 0.7,Rf = (1+dt[,2])^(1/12)-1,geometric = TRUE) #' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-24 20:33:49 UTC (rev 2878) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-24 23:07:08 UTC (rev 2879) @@ -8,7 +8,8 @@ #' #' Modified Generalized Pareto Distribution is given by the following formula #' -#' \dqeqn{G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m^\gamma}{\psi})^(-1/\eta), if \eta \neq 0 \\ 1- e^{-frac{m^\gamma}{\psi}}, if \eta = 0,\end{array}} +#' \deqn{ +#' G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m^\gamma}{\psi})^{-1/\eta}, if \eta \neq 0 \\ 1- e^{-\frac{m^\gamma}{\psi}}, if \eta = 0,\end{array}} #' #' Here \eqn{\gamma \in R} is the modifying parameter. When \eqn{\gamma<1} the corresponding densities are #' strictly decreasing with heavier tail; the GPD is recovered by setting \eqn{\gamma = 1}. \eqn{\gamma \textgreater 1} @@ -30,16 +31,11 @@ #' @param threshold The threshold beyond which the drawdowns have to be modelled #' #' -#'@examples -#' -#'DrawdownGPD(edhec[,1],"gpd",0.95) -#' -#'DrawdownGPD(edhec[,1],"weibull") -#' #'@references #'Mendes, Beatriz V.M.
and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003). #'Coppead Working Paper Series No. 359.Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322. #' +#'@export DrawdownGPD<-function(R,type=c("gpd","weibull"),threshold=0.90){ x = checkData(R) columns = ncol(R) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-24 20:33:49 UTC (rev 2878) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-24 23:07:08 UTC (rev 2879) @@ -40,7 +40,7 @@ #' #' # with S&P 500 data and T-bill data #' -#'dt<-read.zoo("returns.csv",sep=",",header = TRUE) +#'dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) #'dt<-as.xts(dt) #'REDDCOPS(dt[,1],delta = 0.33,Rf = (1+dt[,3])^(1/12)-1,h = 12,geometric = TRUE,asset = "one") #' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd 2013-08-24 20:33:49 UTC (rev 2878) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd 2013-08-24 23:07:08 UTC (rev 2879) @@ -63,6 +63,7 @@ the market performs well. } \examples{ +data(edhec) BetaDrawdown(edhec[,1],edhec[,2]) } \author{ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd 2013-08-24 20:33:49 UTC (rev 2878) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd 2013-08-24 23:07:08 UTC (rev 2879) @@ -38,7 +38,7 @@ \examples{ # with S&P 500 data and T-bill data -dt<-read.zoo("returns.csv",sep=",",header = TRUE) +dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) dt<-as.xts(dt) EDDCOPS(dt[,1],delta = 0.33,gamma = 0.7,Rf = (1+dt[,2])^(1/12)-1,geometric = TRUE) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd 2013-08-24 20:33:49 UTC (rev 2878) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd 2013-08-24 23:07:08 UTC (rev 2879) @@ -55,7 +55,7 @@ \examples{ # with S&P 500 data and T-bill data -dt<-read.zoo("returns.csv",sep=",",header = TRUE) +dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) dt<-as.xts(dt) REDDCOPS(dt[,1],delta = 0.33,Rf = (1+dt[,3])^(1/12)-1,h = 12,geometric = TRUE,asset = "one") From noreply at r-forge.r-project.org Sun Aug 25 12:42:42 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 25 Aug 2013 12:42:42 +0200 (CEST) Subject: [Returnanalytics-commits] r2880 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: . 
man Message-ID: <20130825104242.99F4A183913@r-forge.r-project.org> Author: braverock Date: 2013-08-25 12:42:42 +0200 (Sun, 25 Aug 2013) New Revision: 2880 Removed: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Normalized.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.NormDD.Rd Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CDD.Opt.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/LoSharpe.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.GLM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Return.Okunev.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/SterlingRatio.Norm.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/chart.Autocorrelation.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/quad.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.ComparitiveReturn.GLM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.EmaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.UnsmoothReturn.Rd Log: - update roxygen docs - remove old different-case files for several functions Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION 2013-08-24 23:07:08 UTC (rev 2879) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION 2013-08-25 10:42:42 UTC (rev 2880) @@ -1,38 +1,38 @@ -Package: noniid.sm -Type: Package -Title: Non-i.i.d. GSoC 2013 Shubhankit -Version: 0.1 -Date: $Date: 2013-05-13 14:30:22 -0500 (Mon, 13 May 2013) $ -Author: Shubhankit Mohan -Contributors: Peter Carl, Brian G. Peterson -Depends: - xts, - PerformanceAnalytics -Suggests: - PortfolioAnalytics -Maintainer: Brian G. Peterson -Description: GSoC 2013 project to replicate literature on drawdowns and - non-i.i.d assumptions in finance. -License: GPL-3 -ByteCompile: TRUE -Collate: - 'ACStdDev.annualized.R' - 'CalmarRatio.Normalized.R' - 'CDDopt.R' - 'CDrawdown.R' - 'chart.Autocorrelation.R' - 'EmaxDDGBM.R' - 'GLMSmoothIndex.R' - 'maxDDGBM.R' - 'na.skip.R' - 'Return.GLM.R' - 'table.ComparitiveReturn.GLM.R' - 'table.normDD.R' - 'table.UnsmoothReturn.R' - 'UnsmoothReturn.R' - 'AcarSim.R' - 'CDD.Opt.R' - 'CalmarRatio.Norm.R' - 'SterlingRatio.Norm.R' - 'LoSharpe.R' - 'Return.Okunev.R' +Package: noniid.sm +Type: Package +Title: Non-i.i.d. GSoC 2013 Shubhankit +Version: 0.1 +Date: $Date: 2013-05-13 14:30:22 -0500 (Mon, 13 May 2013) $ +Author: Shubhankit Mohan +Contributors: Peter Carl, Brian G. Peterson +Depends: + xts, + PerformanceAnalytics +Suggests: + PortfolioAnalytics +Maintainer: Brian G. Peterson +Description: GSoC 2013 project to replicate literature on drawdowns and + non-i.i.d assumptions in finance. 
+License: GPL-3 +ByteCompile: TRUE +Collate: + 'ACStdDev.annualized.R' + 'CalmarRatio.Normalized.R' + 'CDDopt.R' + 'CDrawdown.R' + 'chart.Autocorrelation.R' + 'EmaxDDGBM.R' + 'GLMSmoothIndex.R' + 'maxDDGBM.R' + 'na.skip.R' + 'Return.GLM.R' + 'table.ComparitiveReturn.GLM.R' + 'table.normDD.R' + 'table.UnsmoothReturn.R' + 'UnsmoothReturn.R' + 'AcarSim.R' + 'CDD.Opt.R' + 'CalmarRatio.Norm.R' + 'SterlingRatio.Norm.R' + 'LoSharpe.R' + 'Return.Okunev.R' Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd 2013-08-24 23:07:08 UTC (rev 2879) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/ACStdDev.annualized.Rd 2013-08-25 10:42:42 UTC (rev 2880) @@ -1,52 +1,52 @@ -\name{ACStdDev.annualized} -\alias{ACStdDev.annualized} -\alias{sd.annualized} -\alias{sd.multiperiod} -\alias{StdDev.annualized} -\title{Autocorrleation adjusted Standard Deviation} -\usage{ - ACsd.annualized(edhec,3) -} -\arguments{ - \item{x}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} - - \item{lag}{: number of autocorrelated lag factors - inputted by user} - - \item{scale}{number of periods in a year (daily scale = - 252, monthly scale = 12, quarterly scale = 4)} - - \item{\dots}{any other passthru parameters} -} -\description{ - Incorporating the component of lagged autocorrelation - factor into adjusted time scale standard deviation - translation -} -\details{ - Given a sample of historical returns R(1),R(2), . . - .,R(T),the method assumes the fund manager smooths - returns in the following manner, when 't' is the unit - time interval: The square root time translation can be - defined as : \deqn{ \sigma(T) = T \sqrt\sigma(t)} -} -\author{ - Peter Carl,Brian Peterson, Shubhankit Mohan -} -\references{ - Burghardt, G., and L. Liu, \emph{ It's the - Autocorrelation, Stupid (November 2012) Newedge working - paper.} \code{\link[stats]{}} \cr - \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} -} -\seealso{ - \code{\link[stats]{sd}} \cr - \code{\link[stats]{stdDev.annualized}} \cr - \url{http://en.wikipedia.org/wiki/Volatility_(finance)} -} -\keyword{distribution} -\keyword{models} -\keyword{multivariate} -\keyword{ts} - +\name{ACStdDev.annualized} +\alias{ACStdDev.annualized} +\alias{sd.annualized} +\alias{sd.multiperiod} +\alias{StdDev.annualized} +\title{Autocorrleation adjusted Standard Deviation} +\usage{ + ACsd.annualized(edhec,3) +} +\arguments{ + \item{x}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} + + \item{lag}{: number of autocorrelated lag factors + inputted by user} + + \item{scale}{number of periods in a year (daily scale = + 252, monthly scale = 12, quarterly scale = 4)} + + \item{\dots}{any other passthru parameters} +} +\description{ + Incorporating the component of lagged autocorrelation + factor into adjusted time scale standard deviation + translation +} +\details{ + Given a sample of historical returns R(1),R(2), . . + .,R(T),the method assumes the fund manager smooths + returns in the following manner, when 't' is the unit + time interval: The square root time translation can be + defined as : \deqn{ \sigma(T) = T \sqrt\sigma(t)} +} +\author{ + Peter Carl,Brian Peterson, Shubhankit Mohan +} +\references{ + Burghardt, G., and L. 
Liu, \emph{ It's the + Autocorrelation, Stupid (November 2012) Newedge working + paper.} \code{\link[stats]{}} \cr + \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} +} +\seealso{ + \code{\link[stats]{sd}} \cr + \code{\link[stats]{stdDev.annualized}} \cr + \url{http://en.wikipedia.org/wiki/Volatility_(finance)} +} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{ts} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd 2013-08-24 23:07:08 UTC (rev 2879) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd 2013-08-25 10:42:42 UTC (rev 2880) @@ -1,50 +1,50 @@ -\name{AcarSim} -\alias{AcarSim} -\title{Acar-Shane Maximum Loss Plot} -\usage{ - AcarSim(R) -} -\arguments{ - \item{R}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} -} -\description{ - To get some insight on the relationships between maximum - drawdown per unit of volatility and mean return divided - by volatility, we have proceeded to Monte-Carlo - simulations. We have simulated cash flows over a period - of 36 monthly returns and measured maximum drawdown for - varied levels of annualised return divided by volatility - varying from minus \emph{two to two} by step of - \emph{0.1} . The process has been repeated \bold{six - thousand times}. -} -\details{ - Unfortunately, there is no \bold{analytical formulae} to - establish the maximum drawdown properties under the - random walk assumption. We should note first that due to - its definition, the maximum drawdown divided by - volatility is an only function of the ratio mean divided - by volatility. \deqn{MD/[\sigma]= Min (\sum[X(j)])/\sigma - = F(\mu/\sigma)} Where j varies from 1 to n ,which is the - number of drawdown's in simulation -} -\examples{ -library(PerformanceAnalytics) -AcarSim(edhec) -} -\author{ - Peter Carl, Brian Peterson, Shubhankit Mohan -} -\references{ - Maximum Loss and Maximum Drawdown in Financial - Markets,\emph{International Conference Sponsored by BNP - and Imperial College on: Forecasting Financial Markets, - London, United Kingdom, May 1997} - \url{http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf} -} -\keyword{Drawdown} -\keyword{Loss} -\keyword{Maximum} -\keyword{Simulated} - +\name{AcarSim} +\alias{AcarSim} +\title{Acar-Shane Maximum Loss Plot} +\usage{ + AcarSim(R) +} +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} +} +\description{ + To get some insight on the relationships between maximum + drawdown per unit of volatility and mean return divided + by volatility, we have proceeded to Monte-Carlo + simulations. We have simulated cash flows over a period + of 36 monthly returns and measured maximum drawdown for + varied levels of annualised return divided by volatility + varying from minus \emph{two to two} by step of + \emph{0.1} . The process has been repeated \bold{six + thousand times}. +} +\details{ + Unfortunately, there is no \bold{analytical formulae} to + establish the maximum drawdown properties under the + random walk assumption. We should note first that due to + its definition, the maximum drawdown divided by + volatility is an only function of the ratio mean divided + by volatility. 
\deqn{MD/[\sigma]= Min (\sum[X(j)])/\sigma + = F(\mu/\sigma)} Where j varies from 1 to n ,which is the + number of drawdown's in simulation +} +\examples{ +library(PerformanceAnalytics) +AcarSim(edhec) +} +\author{ + Peter Carl, Brian Peterson, Shubhankit Mohan +} +\references{ + Maximum Loss and Maximum Drawdown in Financial + Markets,\emph{International Conference Sponsored by BNP + and Imperial College on: Forecasting Financial Markets, + London, United Kingdom, May 1997} + \url{http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf} +} +\keyword{Drawdown} +\keyword{Loss} +\keyword{Maximum} +\keyword{Simulated} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CDD.Opt.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CDD.Opt.Rd 2013-08-24 23:07:08 UTC (rev 2879) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CDD.Opt.Rd 2013-08-25 10:42:42 UTC (rev 2880) @@ -1,63 +1,63 @@ -\name{CDD.Opt} -\alias{CDD.Opt} -\title{Chekhlov Conditional Drawdown at Risk Optimization} -\usage{ - CDD.Opt(rmat, alpha = 0.05, rmin = 0, wmin = 0, wmax = 1, - weight.sum = 1) -} -\arguments{ - \item{Ra}{return vector of the portfolio} - - \item{p}{confidence interval} -} -\description{ - A new one-parameter family of risk measures called - Conditional Drawdown (CDD) has been proposed. These - measures of risk are functionals of the portfolio - drawdown (underwater) curve considered in active - portfolio management. For some value of the tolerance - parameter, in the case of a single sample path, drawdown - functional is defined as the mean of the worst (1 - - \eqn{\alpha})% drawdowns. -} -\details{ - This section formulates a portfolio optimization problem - with drawdown risk measure and suggests efficient - optimization techniques for its solving. Optimal asset - allocation considers: \enumerate{ \item Generation of - sample paths for the assets' rates of return. \item - Uncompounded cumulative portfolio rate of return rather - than compounded one. } Given a sample path of - instrument's rates of return (r(1),r(2)...,r(N)), the CDD - functional, \eqn{\delta[\alpha(w)]}, is computed by the - following optimization procedure \deqn{\delta[\alpha(w)] - = min y + [1]/[(1-\alpha)N] \sum [z(k)]} s.t. \deqn{z(k) - greater than u(k)-y } \deqn{u(k) greater than u(k-1)- - r(k)} which leads to a single optimal value of y equal to - \eqn{\epsilon(\alpha)} if \eqn{\pi(\epsilon(\alpha)) > - \alpha}, and to a closed interval of optimal y with the - left endpoint of \eqn{\epsilon(\alpha)} if - \eqn{\pi(\epsilon(\alpha)) = \alpha} -} -\examples{ -library(PerformanceAnalytics) -data(edhec) -CDDopt(edhec) -} -\author{ - Peter Carl, Brian Peterson, Shubhankit Mohan -} -\references{ - Chekhlov, Alexei, Uryasev, Stanislav P. and Zabarankin, - Michael, \emph{Drawdown Measure in Portfolio - Optimization} (June 25, 2003). Available at SSRN: - \url{http://ssrn.com/abstract=544742} or - \url{http://dx.doi.org/10.2139/ssrn.544742} -} -\seealso{ - CDrawdown.R -} -\keyword{Conditional} -\keyword{Drawdown} -\keyword{models} - +\name{CDD.Opt} +\alias{CDD.Opt} +\title{Chekhlov Conditional Drawdown at Risk Optimization} +\usage{ + CDD.Opt(rmat, alpha = 0.05, rmin = 0, wmin = 0, wmax = 1, + weight.sum = 1) +} +\arguments{ + \item{Ra}{return vector of the portfolio} + + \item{p}{confidence interval} +} +\description{ + A new one-parameter family of risk measures called + Conditional Drawdown (CDD) has been proposed. 
These + measures of risk are functionals of the portfolio + drawdown (underwater) curve considered in active + portfolio management. For some value of the tolerance + parameter, in the case of a single sample path, drawdown + functional is defined as the mean of the worst (1 - + \eqn{\alpha})% drawdowns. +} +\details{ + This section formulates a portfolio optimization problem + with drawdown risk measure and suggests efficient + optimization techniques for its solving. Optimal asset + allocation considers: \enumerate{ \item Generation of + sample paths for the assets' rates of return. \item + Uncompounded cumulative portfolio rate of return rather + than compounded one. } Given a sample path of + instrument's rates of return (r(1),r(2)...,r(N)), the CDD + functional, \eqn{\delta[\alpha(w)]}, is computed by the + following optimization procedure \deqn{\delta[\alpha(w)] + = min y + [1]/[(1-\alpha)N] \sum [z(k)]} s.t. \deqn{z(k) + greater than u(k)-y } \deqn{u(k) greater than u(k-1)- + r(k)} which leads to a single optimal value of y equal to + \eqn{\epsilon(\alpha)} if \eqn{\pi(\epsilon(\alpha)) > + \alpha}, and to a closed interval of optimal y with the + left endpoint of \eqn{\epsilon(\alpha)} if + \eqn{\pi(\epsilon(\alpha)) = \alpha} +} +\examples{ +library(PerformanceAnalytics) +data(edhec) +CDDopt(edhec) +} +\author{ + Peter Carl, Brian Peterson, Shubhankit Mohan +} +\references{ + Chekhlov, Alexei, Uryasev, Stanislav P. and Zabarankin, + Michael, \emph{Drawdown Measure in Portfolio + Optimization} (June 25, 2003). Available at SSRN: + \url{http://ssrn.com/abstract=544742} or + \url{http://dx.doi.org/10.2139/ssrn.544742} +} +\seealso{ + CDrawdown.R +} +\keyword{Conditional} +\keyword{Drawdown} +\keyword{models} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd 2013-08-24 23:07:08 UTC (rev 2879) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Norm.Rd 2013-08-25 10:42:42 UTC (rev 2880) @@ -1,56 +1,56 @@ -\name{CalmarRatio.Norm} -\alias{CalmarRatio.Norm} -\title{Normalized Calmar ratio} -\usage{ - CalmarRatio.Norm(R, tau = 1, scale = NA) -} -\arguments{ - \item{R}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} - - \item{scale}{number of periods in a year (daily scale = - 252, monthly scale = 12, quarterly scale = 4)} - - \item{excess}{for Sterling Ratio, excess amount to add to - the max drawdown, traditionally and default .1 (10\%)} -} -\description{ - Normalized Calmar and Sterling Ratios are yet another - method of creating a risk-adjusted measure for ranking - investments similar to the Sharpe Ratio. -} -\details{ - Both the Normalized Calmar and the Sterling ratio are the - ratio of annualized return over the absolute value of the - maximum drawdown of an investment. \deqn{Sterling Ratio = - [Return over (0,T)]/[max Drawdown(0,T)]} It is also - \emph{traditional} to use a three year return series for - these calculations, although the functions included here - make no effort to determine the length of your series. - If you want to use a subset of your series, you'll need - to truncate or subset the input data to the desired - length. It is also traditional to use a three year return - series for these calculations, although the functions - included here make no effort to determine the length of - your series. 
If you want to use a subset of your series, - you'll need to truncate or subset the input data to the - desired length. -} -\examples{ -data(managers) - CalmarRatio.Norm(managers[,1,drop=FALSE]) - CalmarRatio.Norm(managers[,1:6]) -} -\author{ - Brian G. Peterson , Peter Carl , Shubhankit Mohan -} -\references{ - Bacon, Carl, Magdon-Ismail, M. and Amir Atiya,\emph{ - Maximum drawdown. Risk Magazine,} 01 Oct 2004. - \url{http://www.cs.rpi.edu/~magdon/talks/mdd_NYU04.pdf} -} -\keyword{distribution} -\keyword{models} -\keyword{multivariate} -\keyword{ts} - +\name{CalmarRatio.Norm} +\alias{CalmarRatio.Norm} +\title{Normalized Calmar ratio} +\usage{ + CalmarRatio.Norm(R, tau = 1, scale = NA) +} +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} + + \item{scale}{number of periods in a year (daily scale = + 252, monthly scale = 12, quarterly scale = 4)} + + \item{excess}{for Sterling Ratio, excess amount to add to + the max drawdown, traditionally and default .1 (10\%)} +} +\description{ + Normalized Calmar and Sterling Ratios are yet another + method of creating a risk-adjusted measure for ranking + investments similar to the Sharpe Ratio. +} +\details{ + Both the Normalized Calmar and the Sterling ratio are the + ratio of annualized return over the absolute value of the + maximum drawdown of an investment. \deqn{Sterling Ratio = + [Return over (0,T)]/[max Drawdown(0,T)]} It is also + \emph{traditional} to use a three year return series for + these calculations, although the functions included here + make no effort to determine the length of your series. + If you want to use a subset of your series, you'll need + to truncate or subset the input data to the desired + length. It is also traditional to use a three year return + series for these calculations, although the functions + included here make no effort to determine the length of + your series. If you want to use a subset of your series, + you'll need to truncate or subset the input data to the + desired length. +} +\examples{ +data(managers) + CalmarRatio.Norm(managers[,1,drop=FALSE]) + CalmarRatio.Norm(managers[,1:6]) +} +\author{ + Brian G. Peterson , Peter Carl , Shubhankit Mohan +} +\references{ + Bacon, Carl, Magdon-Ismail, M. and Amir Atiya,\emph{ + Maximum drawdown. Risk Magazine,} 01 Oct 2004. 
+ \url{http://www.cs.rpi.edu/~magdon/talks/mdd_NYU04.pdf} +} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{ts} + Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Normalized.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Normalized.Rd 2013-08-24 23:07:08 UTC (rev 2879) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Normalized.Rd 2013-08-25 10:42:42 UTC (rev 2880) @@ -1,7 +0,0 @@ -\name{SterlingRatio.Normalized} -\alias{SterlingRatio.Normalized} -\usage{ - SterlingRatio.Normalized(R, tau = 1, scale = NA, - excess = 0.1) -} - Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Rd 2013-08-24 23:07:08 UTC (rev 2879) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.Rd 2013-08-25 10:42:42 UTC (rev 2880) @@ -1,7 +0,0 @@ -\name{SterlingRatio.Normalized} -\alias{SterlingRatio.Normalized} -\usage{ - SterlingRatio.Normalized(R, tau = 1, scale = NA, - excess = 0.1) -} - Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd 2013-08-24 23:07:08 UTC (rev 2879) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd 2013-08-25 10:42:42 UTC (rev 2880) @@ -1,77 +1,77 @@ -\name{QP.Norm} -\alias{Normalized.CalmarRatio} -\alias{Normalized.SterlingRatio} -\alias{QP.Norm} -\alias{SterlingRatio.Normalized} -\title{QP function fo calculation of Sharpe Ratio} -\usage{ - QP.Norm(R, tau, scale = NA) - - SterlingRatio.Normalized(R, tau = 1, scale = NA, - excess = 0.1) -} -\arguments{ - \item{R}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} - - \item{scale}{number of periods in a year (daily scale = - 252, monthly scale = 12, quarterly scale = 4)} - - \item{excess}{for Sterling Ratio, excess amount to add to - the max drawdown, traditionally and default .1 (10\%)} -} -\description{ - calculate a Normalized Calmar or Sterling reward/risk - ratio -} -\details{ - Normalized Calmar and Sterling Ratios are yet another - method of creating a risk-adjusted measure for ranking - investments similar to the \code{\link{SharpeRatio}}. - - Both the Normalized Calmar and the Sterling ratio are the - ratio of annualized return over the absolute value of the - maximum drawdown of an investment. The Sterling ratio - adds an excess risk measure to the maximum drawdown, - traditionally and defaulting to 10\%. - - It is also traditional to use a three year return series - for these calculations, although the functions included - here make no effort to determine the length of your - series. If you want to use a subset of your series, - you'll need to truncate or subset the input data to the - desired length. - - Many other measures have been proposed to do similar - reward to risk ranking. It is the opinion of this author - that newer measures such as Sortino's - \code{\link{UpsidePotentialRatio}} or Favre's modified - \code{\link{SharpeRatio}} are both \dQuote{better} - measures, and should be preferred to the Calmar or - Sterling Ratio. 
-}
-\examples{
-data(managers)
-    Normalized.CalmarRatio(managers[,1,drop=FALSE])
-    Normalized.CalmarRatio(managers[,1:6])
-    Normalized.SterlingRatio(managers[,1,drop=FALSE])
-    Normalized.SterlingRatio(managers[,1:6])
-}
-\author{
-  Brian G. Peterson
-}
-\references{
-  Bacon, Carl. \emph{Magdon-Ismail, M. and Amir Atiya,
-  Maximum drawdown. Risk Magazine, 01 Oct 2004.
-}
-\seealso{
-  \code{\link{Return.annualized}}, \cr
-  \code{\link{maxDrawdown}}, \cr
-  \code{\link{SharpeRatio.modified}}, \cr
-  \code{\link{UpsidePotentialRatio}}
-}
-\keyword{distribution}
-\keyword{models}
-\keyword{multivariate}
-\keyword{ts}
-
+\name{QP.Norm}
+\alias{Normalized.CalmarRatio}
+\alias{Normalized.SterlingRatio}
+\alias{QP.Norm}
+\alias{SterlingRatio.Normalized}
+\title{QP function for calculation of Sharpe Ratio}
+\usage{
+  QP.Norm(R, tau, scale = NA)
+
+  SterlingRatio.Normalized(R, tau = 1, scale = NA,
+    excess = 0.1)
+}
+\arguments{
+  \item{R}{an xts, vector, matrix, data frame, timeSeries
+  or zoo object of asset returns}
+
+  \item{scale}{number of periods in a year (daily scale =
+  252, monthly scale = 12, quarterly scale = 4)}
+
+  \item{excess}{for Sterling Ratio, excess amount to add to
+  the max drawdown, traditionally and default .1 (10\%)}
+}
+\description{
+  calculate a Normalized Calmar or Sterling reward/risk
+  ratio
+}
+\details{
+  Normalized Calmar and Sterling Ratios are yet another
+  method of creating a risk-adjusted measure for ranking
+  investments similar to the \code{\link{SharpeRatio}}.
+
+  Both the Normalized Calmar and the Sterling ratio are the
+  ratio of annualized return over the absolute value of the
+  maximum drawdown of an investment. The Sterling ratio
+  adds an excess risk measure to the maximum drawdown,
+  traditionally and defaulting to 10\%.
+
+  It is also traditional to use a three year return series
+  for these calculations, although the functions included
+  here make no effort to determine the length of your
+  series. If you want to use a subset of your series,
+  you'll need to truncate or subset the input data to the
+  desired length.
+
+  Many other measures have been proposed to do similar
+  reward to risk ranking. It is the opinion of this author
+  that newer measures such as Sortino's
+  \code{\link{UpsidePotentialRatio}} or Favre's modified
+  \code{\link{SharpeRatio}} are both \dQuote{better}
+  measures, and should be preferred to the Calmar or
+  Sterling Ratio.
+}
+\examples{
+data(managers)
+    Normalized.CalmarRatio(managers[,1,drop=FALSE])
+    Normalized.CalmarRatio(managers[,1:6])
+    Normalized.SterlingRatio(managers[,1,drop=FALSE])
+    Normalized.SterlingRatio(managers[,1:6])
+}
+\author{
+  Brian G. Peterson
+}
+\references{
+  Bacon, Carl, Magdon-Ismail, M. and Amir Atiya, \emph{
+  Maximum drawdown.} Risk Magazine, 01 Oct 2004.
+} +\seealso{ + \code{\link{Return.annualized}}, \cr + \code{\link{maxDrawdown}}, \cr + \code{\link{SharpeRatio.modified}}, \cr + \code{\link{UpsidePotentialRatio}} +} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{ts} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd 2013-08-24 23:07:08 UTC (rev 2879) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/Cdrawdown.Rd 2013-08-25 10:42:42 UTC (rev 2880) @@ -1,92 +1,92 @@ -\name{CDDOpt} -\alias{CDDOpt} -\alias{CDrawdown} -\title{Chekhlov Conditional Drawdown at Risk} -\usage{ - CDDOpt(rmat, alpha = 0.05, rmin = 0, wmin = 0, wmax = 1, - weight.sum = 1) - - CDrawdown(R, p = 0.9, ...) -} -\arguments{ - \item{Ra}{return vector of the portfolio} - - \item{p}{confidence interval} - - \item{Ra}{return vector of the portfolio} - - \item{p}{confidence interval} -} -\description{ - A new one-parameter family of risk measures called - Conditional Drawdown (CDD) has been proposed. These - measures of risk are functionals of the portfolio - drawdown (underwater) curve considered in active - portfolio management. For some value of the tolerance - parameter, in the case of a single sample path, drawdown - functional is defineed as the mean of the worst (1 - - \eqn{\alpha})% drawdowns. - - A new one-parameter family of risk measures called - Conditional Drawdown (CDD) has been proposed. These - measures of risk are functionals of the portfolio - drawdown (underwater) curve considered in active - portfolio management. For some value of the tolerance - parameter, in the case of a single sample path, drawdown - functional is defineed as the mean of the worst (1 - - \eqn{\alpha})% drawdowns. -} -\details{ - This section formulates a portfolio optimization problem - with drawdown risk measure and suggests e???cient - optimization techniques for its solving. Optimal asset - allocation considers: 1) Generation of sample paths for - the assets' rates of return. 2) Uncompounded cumulative - portfolio rate of return rather than compounded one. - - The \bold{CDD} is related to Value-at-Risk (VaR) and - Conditional Value-at-Risk (CVaR) measures studied by - Rockafellar and Uryasev . By definition, with respect to - a specified probability level \eqn{\alpha}, the - \bold{\eqn{\alpha}-VaR} of a portfolio is the lowest - amount \eqn{\epsilon} , \eqn{\alpha} such that, with - probability \eqn{\alpha}, the loss will not exceed - \eqn{\epsilon} , \eqn{\alpha} in a specified time T, - whereas the \bold{\eqn{\alpha}-CVaR} is the conditional - expectation of losses above that amount \eqn{\epsilon} . - Various issues about VaR methodology were discussed by - Jorion . The CDD is similar to CVaR and can be viewed as - a modification of the CVaR to the case when the - loss-function is defined as a drawdown. CDD and CVaR are - conceptually related percentile-based risk performance - functionals. -} -\examples{ -library(PerformanceAnalytics) -data(edhec) -CDDopt(edhec) -library(PerformanceAnalytics) -data(edhec) -CDrawdown(edhec) -} -\author{ - Peter Carl, Brian Peterson, Shubhankit Mohan - - Peter Carl, Brian Peterson, Shubhankit Mohan -} -\references{ - DRAWDOWN MEASURE IN PORTFOLIO - OPTIMIZATION,\emph{International Journal of Theoretical - and Applied Finance} ,Fall 1994, 49-58.Vol. 8, No. 1 - (2005) 13-58 - - Chekhlov, Alexei, Uryasev, Stanislav P. 
and Zabarankin,
-  Michael, \emph{Drawdown Measure in Portfolio
-  Optimization} (June 25, 2003). Available at SSRN:
-  \url{http://ssrn.com/abstract=544742} or
-  \url{http://dx.doi.org/10.2139/ssrn.544742}
-}
-\keyword{Conditional}
-\keyword{Drawdown}
-\keyword{models}
-
+\name{CDDOpt}
+\alias{CDDOpt}
+\alias{CDrawdown}
+\title{Chekhlov Conditional Drawdown at Risk}
+\usage{
+  CDDOpt(rmat, alpha = 0.05, rmin = 0, wmin = 0, wmax = 1,
+    weight.sum = 1)
+
+  CDrawdown(R, p = 0.9, ...)
+}
+\arguments{
+  \item{R}{return vector of the portfolio}
+
+  \item{p}{confidence level}
+}
+\description{
+  A new one-parameter family of risk measures called
+  Conditional Drawdown (CDD) has been proposed. These
+  measures of risk are functionals of the portfolio
+  drawdown (underwater) curve considered in active
+  portfolio management. For some value of the tolerance
+  parameter, in the case of a single sample path, the drawdown
+  functional is defined as the mean of the worst (1 -
+  \eqn{\alpha})\% drawdowns.
+}
+\details{
+  This section formulates a portfolio optimization problem
+  with the drawdown risk measure and suggests efficient
+  optimization techniques for solving it. Optimal asset
+  allocation considers: 1) Generation of sample paths for
+  the assets' rates of return. 2) Uncompounded cumulative
+  portfolio rate of return rather than compounded one.
+
+  The \bold{CDD} is related to Value-at-Risk (VaR) and
+  Conditional Value-at-Risk (CVaR) measures studied by
+  Rockafellar and Uryasev. By definition, with respect to
+  a specified probability level \eqn{\alpha}, the
+  \bold{\eqn{\alpha}-VaR} of a portfolio is the lowest
+  amount \eqn{\epsilon(\alpha)} such that, with
+  probability \eqn{\alpha}, the loss will not exceed
+  \eqn{\epsilon(\alpha)} in a specified time T,
+  whereas the \bold{\eqn{\alpha}-CVaR} is the conditional
+  expectation of losses above that amount \eqn{\epsilon(\alpha)}.
+  Various issues about VaR methodology were discussed by
+  Jorion. The CDD is similar to CVaR and can be viewed as
+  a modification of the CVaR to the case when the
+  loss-function is defined as a drawdown. CDD and CVaR are
+  conceptually related percentile-based risk performance
+  functionals.
+}
+\examples{
+library(PerformanceAnalytics)
+data(edhec)
+CDDOpt(edhec)
+CDrawdown(edhec)
+}
+\author{
+  Peter Carl, Brian Peterson, Shubhankit Mohan
+}
+\references{
+  Chekhlov, Alexei, Uryasev, Stanislav P. and Zabarankin,
+  Michael, Drawdown Measure in Portfolio Optimization,
+  \emph{International Journal of Theoretical and Applied
+  Finance}, Vol. 8, No. 1 (2005), 13-58.
+
+  Chekhlov, Alexei, Uryasev, Stanislav P. and Zabarankin,
+  Michael, \emph{Drawdown Measure in Portfolio
+  Optimization} (June 25, 2003).
Available at SSRN:
+  \url{http://ssrn.com/abstract=544742} or
+  \url{http://dx.doi.org/10.2139/ssrn.544742}
+}
+\keyword{Conditional}
+\keyword{Drawdown}
+\keyword{models}
+

Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd	2013-08-24 23:07:08 UTC (rev 2879)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EmaxDDGBM.Rd	2013-08-25 10:42:42 UTC (rev 2880)
@@ -1,23 +0,0 @@
-\name{EMaxDDGBM}
-\alias{EMaxDDGBM}
-\title{Expected Drawdown using Brownian Motion Assumptions}
-\usage{
-  EMaxDDGBM(R, digits = 4)
-}
-\arguments{
-  \item{R}{an xts, vector, matrix, data frame, timeSeries
-  or zoo object of asset returns}
-}
-\description{
-  Works on the model specified by Magdon-Ismail
-}
-\author{
-  R
-}
-\keyword{Assumptions}
-\keyword{Brownian}
-\keyword{Drawdown}
-\keyword{Expected}
-\keyword{Motion}
-\keyword{Using}
-

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd	2013-08-24 23:07:08 UTC (rev 2879)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/GLMSmoothIndex.Rd	2013-08-25 10:42:42 UTC (rev 2880)
@@ -1,48 +1,48 @@
-\name{GLMSmoothIndex}
-\alias{GLMSmoothIndex}
-\alias{Return.Geltner}
-\title{GLM Index}
-\usage{
-  GLMSmoothIndex(R = NULL, ...)
-}
-\arguments{
-  \item{R}{an xts, vector, matrix, data frame, timeSeries
-  or zoo object of asset returns}
-}
-\description{
-  The Getmansky Lo Markov Smoothing Index is a useful summary
-  statistic for measuring the concentration of weights; it is the
-  sum of squares of the Moving Average lag coefficients. This
-  measure is well known in the industrial organization
-  literature as the \bold{Herfindahl index}, a measure of
-  the concentration of firms in a given industry. The index
-  is maximized when one coefficient is 1 and the rest are
-  0.
In the context of smoothed returns, a lower value
-  implies more smoothing, and the upper bound of 1 implies
-  no smoothing, hence \eqn{\xi} is referred to as a [TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/returnanalytics -r 2880

From noreply at r-forge.r-project.org  Sun Aug 25 13:40:17 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Sun, 25 Aug 2013 13:40:17 +0200 (CEST)
Subject: [Returnanalytics-commits] r2881 - pkg/PortfolioAnalytics/vignettes
Message-ID: <20130825114018.10C0A184C54@r-forge.r-project.org>

Author: braverock
Date: 2013-08-25 13:40:17 +0200 (Sun, 25 Aug 2013)
New Revision: 2881

Added:
   pkg/PortfolioAnalytics/vignettes/risk_budget_optimization.Rnw
Removed:
   pkg/PortfolioAnalytics/vignettes/optimization-overview.Snw
Log:
- update risk budget vignette to use v2 portfolio.spec
- rename to better match the new standard

Deleted: pkg/PortfolioAnalytics/vignettes/optimization-overview.Snw
===================================================================
--- pkg/PortfolioAnalytics/vignettes/optimization-overview.Snw	2013-08-25 10:42:42 UTC (rev 2880)
+++ pkg/PortfolioAnalytics/vignettes/optimization-overview.Snw	2013-08-25 11:40:17 UTC (rev 2881)
@@ -1,361 +0,0 @@
-\documentclass[a4paper]{article}
-\usepackage[round]{natbib}
-\usepackage{bm}
-\usepackage{verbatim}
-\usepackage[latin1]{inputenc}
-% \VignetteIndexEntry{Portfolio Optimization with CVaR budgets in PortfolioAnalytics}
-\bibliographystyle{abbrvnat}
-
-\usepackage{url}
-
-\let\proglang=\textsf
-\newcommand{\pkg}[1]{{\fontseries{b}\selectfont #1}}
-\newcommand{\R}[1]{{\fontseries{b}\selectfont #1}}
-\newcommand{\email}[1]{\href{mailto:#1}{\normalfont\texttt{#1}}}
-\newcommand{\E}{\mathsf{E}}
-\newcommand{\VAR}{\mathsf{VAR}}
-\newcommand{\COV}{\mathsf{COV}}
-\newcommand{\Prob}{\mathsf{P}}
-
-\renewcommand{\topfraction}{0.85}
-\renewcommand{\textfraction}{0.1}
-\renewcommand{\baselinestretch}{1.5}
-\setlength{\textwidth}{15cm} \setlength{\textheight}{22cm} \topmargin-1cm \evensidemargin0.5cm \oddsidemargin0.5cm
-
-\usepackage[latin1]{inputenc}
-% or whatever
-
-\usepackage{lmodern}
-\usepackage[T1]{fontenc}
-% Or whatever. Note that the encoding and the font should match. If T1
-% does not look nice, try deleting the line with the fontenc.
-
-\begin{document}
-
-\title{Vignette: Portfolio Optimization with CVaR budgets\\
-in PortfolioAnalytics}
-\author{Kris Boudt, Peter Carl and Brian Peterson }
-\date{June 1, 2010}
-
-\maketitle
-\tableofcontents
-
-
-\bigskip
-
-\section{General information}
-
-Risk budgets are a central tool to estimate and manage the portfolio risk allocation. They decompose total portfolio risk into the risk contribution of each position. \citet{BoudtCarlPeterson2010} propose several portfolio allocation strategies that use an appropriate transformation of the portfolio Conditional Value at Risk (CVaR) budget as an objective or constraint in the portfolio optimization problem. This document explains how risk allocation optimized portfolios can be obtained under general constraints in the \verb"PortfolioAnalytics" package of \citet{PortAnalytics}.
-
-\verb"PortfolioAnalytics" is designed to provide numerical solutions for portfolio problems with complex constraints and objective sets comprised of any R function. It can e.g.~construct portfolios that minimize a risk objective with (possibly non-linear) per-asset constraints on returns and drawdowns \citep{CarlPetersonBoudt2010}.
The generality of possible constraints and objectives is a distinctive characteristic of the package with respect to RMetrics \verb"fPortfolio" of \citet{fPortfolioBook}. For standard Markowitz optimization problems, use of \verb"fPortfolio" rather than \verb"PortfolioAnalytics" is recommended. - -\verb"PortfolioAnalytics" solves the following type of problem -\begin{equation} \min_w g(w) \ \ s.t. \ \ -\left\{ \begin{array}{l} h_1(w)\leq 0 \\ \vdots \\ h_q(w)\leq 0. \end{array} \right. \label{optimproblem}\end{equation} \verb"PortfolioAnalytics" first merges the objective function and constraints into a penalty augmented objective function -\begin{equation} L(w) = g(w) + \mbox{penalty}\sum_{i=1}^q \lambda_i \max(h_i(w),0), \label{eq:constrainedobj} \end{equation} -where $\lambda_i$ is a multiplier to tune the relative importance of the constraints. The default values of penalty and $\lambda_i$ (called \verb"multiplier" in \verb"PortfolioAnalytics") are 10000 and 1, respectively. - -The minimum of this function is found through the \emph{Differential Evolution} (DE) algorithm of \citet{StornPrice1997} and ported to R by \citet{MullenArdiaGilWindoverCline2009}. DE is known for remarkable performance regarding continuous numerical problems \citep{PriceStornLampinen2006}. It has recently been advocated for optimizing portfolios under non-convex settings by \citet{Ardia2010} and \citet{Yollin2009}, among others. We use the R implementation of DE in the \verb"DEoptim" package of \citet{DEoptim}. - -The latest version of the \verb"PortfolioAnalytics" package can be downloaded from R-forge through the following command: -\begin{verbatim} -install.packages("PortfolioAnalytics", repos="http://R-Forge.R-project.org") -\end{verbatim} - -Its principal functions are: -\begin{itemize} -\item \verb"constraint(assets,min,max,min_sum,max_sum)": the portfolio optimization specification starts with specifying the shape of the weight vector through the function \verb"constraint". The weights have to be between \verb"min} and \verb"max" and their sum between \verb"min_sum" and \verb"max_sum". The first argument \verb"assets" is either a number indicating the number of portfolio assets or a vector holding the names of the assets. - -\item \verb"add.objective(constraints, type, name)": \verb"constraints" is a list holding the objective to be minimized and the constraints. New elements to this list are added by the function \verb"add.objective". Many common risk budget objectives and constraints are prespecified and can be identified by specifying the \verb"type" and \verb"name". - - -\item \verb"constrained_objective(w, R, constraints)": given the portfolio weight and return data, it evaluates the penalty augmented objective function in (\ref{eq:constrainedobj}). - -\item \verb"optimize.portfolio(R,constraints)": this function returns the portfolio weight that solves the problem in (\ref{optimproblem}). {\it R} is the multivariate return series of the portfolio components. - -\item \verb"optimize.portfolio.rebalancing(R,constraints,rebalance_on,trailing_periods": this function solves the multiperiod optimization problem. It returns for each rebalancing period the optimal weights and allows the estimation sample to be either from inception or a moving window. - -\end{itemize} - -Next we illustrate these functions on monthly return data for bond, US equity, international equity and commodity indices, which are the first 4 series -in the dataset \verb"indexes". 
The first step is to load the package \verb"PortfolioAnalytics" and the dataset. An important first note is that some of the functions (especially \verb" optimize.portfolio.rebalancing") requires the dataset to be a \verb"xts" object \citep{xts}. - - -<>= -options(width=80) -@ - -<>=| -library(PortfolioAnalytics) -#source("constrained_objective.R") -data(indexes) -class(indexes) -indexes <- indexes[,1:4] -head(indexes,2) -tail(indexes,2) -@ - -In what follows, we first illustrate the construction of the penalty augmented objective function. Then we present the code for solving the optimization problem. - -\section{Setting of the objective function} - -\subsection{Weight constraints} - -<>=| -Wcons <- constraint( assets = colnames(indexes[,1:4]) ,min = rep(0,4), max=rep(1,4), min_sum=1,max_sum=1 ) -@ - -Given the weight constraints, we can call the value of the function to be minimized. We consider the case of no violation and a case of violation. By default, \verb"normalize=TRUE" which means that if the sum of weights exceeds \verb"max_sum", the weight vector is normalized by multiplying it with \verb"sum(weights)/max_sum" such that the weights evaluated in the objective function satisfy the \verb"max_sum" constraint. -<>=| -constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = Wcons) -constrained_objective_v1( w = rep(1/3,4) , R = indexes[,1:4] , constraints = Wcons) -constrained_objective_v1( w = rep(1/3,4) , R = indexes[,1:4] , constraints = Wcons, normalize=FALSE) -@ - -The latter value can be recalculated as penalty times the weight violation, that is: $10000 \times 1/3.$ - -\subsection{Minimum CVaR objective function} - -Suppose now we want to find the portfolio that minimizes the 95\% portfolio CVaR subject to the weight constraints listed above. - -<>=| -ObjSpec = add.objective_v1( constraints = Wcons , type="risk",name="CVaR", -arguments=list(p=0.95), enabled=TRUE) -@ - -The value of the objective function is: -<>=| -constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec) -@ -This is the CVaR of the equal-weight portfolio as computed by the function \verb"ES" in the \verb"PerformanceAnalytics" package of \citet{ Carl2007} -<>=| -library(PerformanceAnalytics) -out<-ES(indexes[,1:4],weights = rep(1/4,4),p=0.95, portfolio_method="component") -out$MES -@ -All arguments in the function \verb"ES" can be passed on through \verb"arguments". E.g. to reduce the impact of extremes on the portfolio results, it is recommended to winsorize the data using the option clean="boudt". - -<>=| -out<-ES(indexes[,1:4],weights = rep(1/4,4),p=0.95,clean="boudt", portfolio_method="component") -out$MES -@ - - - -For the formulation of the objective function, this implies setting: -<>=| -ObjSpec = add.objective_v1( constraints = Wcons , type="risk",name="CVaR", -arguments=list(p=0.95,clean="boudt"), enabled=TRUE) -constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec) -@ - -An additional argument that is not available for the moment in \verb"ES" is to estimate the conditional covariance matrix trough -the constant conditional correlation model of \citet{Bollerslev90}. 
- -For the formulation of the objective function, this implies setting: -<>=| -ObjSpec = add.objective_v1( constraints = Wcons , type="risk",name="CVaR", -arguments=list(p=0.95,clean="boudt"), enabled=TRUE, garch=TRUE) -constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec) -@ - -\subsection{Minimum CVaR concentration objective function} - -Add the minimum 95\% CVaR concentration objective to the objective function: -<>=| -ObjSpec = add.objective_v1( constraints = Wcons , type="risk_budget_objective",name="CVaR", -arguments=list(p=0.95,clean="boudt"), min_concentration=TRUE, enabled=TRUE) -@ -The value of the objective function is: -<>=| -constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec) -@ -We can verify that this is effectively the largest CVaR contribution of that portfolio as follows: -<>=| -ES(indexes[,1:4],weights = rep(1/4,4),p=0.95,clean="boudt", portfolio_method="component") -@ - -\subsection{Risk allocation constraints} - -We see that in the equal-weight portfolio, the international equities and commodities investment -cause more than 30\% of total risk. We could specify as a constraint that no asset can contribute -more than 30\% to total portfolio risk. This involves the construction of the following objective function: - -<>=| -ObjSpec = add.objective_v1( constraints = Wcons , type="risk_budget_objective",name="CVaR", max_prisk = 0.3, -arguments=list(p=0.95,clean="boudt"), enabled=TRUE) -constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec) -@ - -This value corresponds to the penalty parameter which has by default the value of 10000 times the exceedances: $ 10000*(0.045775103+0.054685023)\approx 1004.601.$ - -\section{Optimization} - -The penalty augmented objective function is minimized through Differential Evolution. Two parameters are crucial in tuning the optimization: \verb"search_size" and \verb"itermax". The optimization routine -\begin{enumerate} -\item First creates the initial generation of \verb"NP= search_size/itermax" guesses for the optimal value of the parameter vector, using the \verb"random_portfolios" function generating random weights satisfying the weight constraints. -\item Then DE evolves over this population of candidate solutions using alteration and selection operators in order to minimize the objective function. It restarts \verb"itermax" times. -\end{enumerate} It is important that \verb"search_size/itermax" is high enough. It is generally recommended that this ratio is at least ten times the length of the weight vector. For more details on the use of DE strategy in portfolio allocation, we refer the -reader to \citet{Ardia2010}. 
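-
-As a quick sanity check of this rule of thumb (a small arithmetic illustration only, using the parameter values from the examples below):
-\begin{verbatim}
-> # NP = search_size/itermax candidate portfolios per DE generation
-> search_size <- 2000; itermax <- 10
-> search_size / itermax
-[1] 200
-> (search_size / itermax) >= 10 * 4   # ten times the length of the weight vector
-[1] TRUE
-\end{verbatim}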
- -\subsection{Minimum CVaR portfolio under an upper 40\% CVaR allocation constraint} - -The functions needed to obtain the minimum CVaR portfolio under an upper 40\% CVaR allocation constraint are the following: -\begin{verbatim} -> ObjSpec <- constraint(assets = colnames(indexes[,1:4]),min = rep(0,4), -+ max=rep(1,4), min_sum=1,max_sum=1 ) -> ObjSpec <- add.objective_v1( constraints = ObjSpec, type="risk", -+ name="CVaR", arguments=list(p=0.95,clean="boudt"),enabled=TRUE) -> ObjSpec <- add.objective_v1( constraints = ObjSpec, -+ type="risk_budget_objective", name="CVaR", max_prisk = 0.4, -+ arguments=list(p=0.95,clean="boudt"), enabled=TRUE) -> set.seed(1234) -> out = optimize.portfolio_v1(R= indexes[,1:4],constraints=ObjSpec, -+ optimize_method="DEoptim",itermax=10, search_size=2000) -\end{verbatim} -After the call to these functions it starts to explore the feasible space iteratively: -\begin{verbatim} -Iteration: 1 bestvalit: 0.029506 bestmemit: 0.810000 0.126000 0.010000 0.140000 -Iteration: 2 bestvalit: 0.029506 bestmemit: 0.810000 0.126000 0.010000 0.140000 -Iteration: 3 bestvalit: 0.029272 bestmemit: 0.758560 0.079560 0.052800 0.112240 -Iteration: 4 bestvalit: 0.029272 bestmemit: 0.758560 0.079560 0.052800 0.112240 -Iteration: 5 bestvalit: 0.029019 bestmemit: 0.810000 0.108170 0.010000 0.140000 -Iteration: 6 bestvalit: 0.029019 bestmemit: 0.810000 0.108170 0.010000 0.140000 -Iteration: 7 bestvalit: 0.029019 bestmemit: 0.810000 0.108170 0.010000 0.140000 -Iteration: 8 bestvalit: 0.028874 bestmemit: 0.692069 0.028575 0.100400 0.071600 -Iteration: 9 bestvalit: 0.028874 bestmemit: 0.692069 0.028575 0.100400 0.071600 -Iteration: 10 bestvalit: 0.028874 bestmemit: 0.692069 0.028575 0.100400 0.071600 -elapsed time:1.85782111114926 -\end{verbatim} - -If \verb"TRACE=FALSE" the only output in \verb"out" is the weight vector that optimizes the objective function. - -\begin{verbatim} -> out[[1]] - US Bonds US Equities Int'l Equities Commodities - 0.77530240 0.03201150 0.11247491 0.08021119 \end{verbatim} - -If \verb"TRACE=TRUE" additional information is given such as the value of the objective function and the different constraints. 
-
-\subsection{Minimum CVaR concentration portfolio}
-
-The functions needed to obtain the minimum CVaR concentration portfolio are the following:
-
-\begin{verbatim}
-> ObjSpec <- constraint(assets = colnames(indexes[,1:4]) ,min = rep(0,4),
-+ max=rep(1,4), min_sum=1,max_sum=1 )
-> ObjSpec <- add.objective_v1( constraints = ObjSpec,
-+ type="risk_budget_objective", name="CVaR",
-+ arguments=list(p=0.95,clean="boudt"),
-+ min_concentration=TRUE,enabled=TRUE)
-> set.seed(1234)
-> out = optimize.portfolio_v1(R= indexes[,1:4],constraints=ObjSpec,
-+ optimize_method="DEoptim",itermax=50, search_size=5000)
-\end{verbatim}
-The iterations are as follows:
-\begin{verbatim}
-Iteration: 1 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000
-Iteration: 2 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000
-Iteration: 3 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000
-Iteration: 4 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000
-Iteration: 5 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000
-Iteration: 45 bestvalit: 0.008209 bestmemit: 0.976061 0.151151 0.120500 0.133916
-Iteration: 46 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004
-Iteration: 47 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004
-Iteration: 48 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004
-Iteration: 49 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004
-Iteration: 50 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004
-elapsed time:4.1324522222413
-\end{verbatim}
-This portfolio has the equal risk contribution characteristic:
-\begin{verbatim}
-> out[[1]]
-      US Bonds    US Equities Int'l Equities    Commodities
-    0.70528537     0.11118139     0.08610905     0.09742419
-> ES(indexes[,1:4],weights = out[[1]],p=0.95,clean="boudt",
-+ portfolio_method="component")
-$MES
-           [,1]
-[1,] 0.03246264
-
-$contribution
-      US Bonds    US Equities Int'l Equities    Commodities
-   0.008169565    0.008121930    0.008003228    0.008167917
-
-$pct_contrib_MES
-      US Bonds    US Equities Int'l Equities    Commodities
-     0.2516605      0.2501931      0.2465366      0.2516098
-\end{verbatim}
-
-
-
-
-\subsection{Dynamic optimization}
-
-Dynamic rebalancing of the risk budget optimized portfolio is possible through the function \verb"optimize.portfolio.rebalancing". Additional arguments are \verb"rebalance_on", which indicates the rebalancing frequency (years, quarters, months); \verb"trailing_periods", which controls whether the estimation is done from inception (\verb"trailing_periods=0") or over a moving window with that many observations; and \verb"training_period", which specifies the minimum number of observations in the estimation sample. Its default value is 36, which corresponds to three years for monthly data.
-
-As an example, consider the minimum CVaR concentration portfolio, with estimation from inception and monthly rebalancing. Since we require a minimum estimation length of the total number of observations minus one, we can optimize the portfolio only for the last two months.
-
-\begin{verbatim}
-> set.seed(1234)
-> out = optimize.portfolio.rebalancing_v1(R= indexes,constraints=ObjSpec, rebalance_on ="months",
-+ optimize_method="DEoptim",itermax=50, search_size=5000, training_period = nrow(indexes)-1 )
-\end{verbatim}
-
-For each of the optimizations, the iterations are given as intermediate output:
-\begin{verbatim}
-Iteration: 1 bestvalit: 0.010655 bestmemit: 0.800000 0.100000 0.118000 0.030000
-Iteration: 2 bestvalit: 0.010655 bestmemit: 0.800000 0.100000 0.118000 0.030000
-Iteration: 49 bestvalit: 0.008207 bestmemit: 0.787525 0.124897 0.098001 0.108258
-Iteration: 50 bestvalit: 0.008195 bestmemit: 0.774088 0.122219 0.095973 0.104338
-elapsed time:4.20546416666773
-Iteration: 1 bestvalit: 0.011006 bestmemit: 0.770000 0.050000 0.090000 0.090000
-Iteration: 2 bestvalit: 0.010559 bestmemit: 0.498333 0.010000 0.070000 0.080000
-Iteration: 49 bestvalit: 0.008267 bestmemit: 0.828663 0.126173 0.100836 0.114794
-Iteration: 50 bestvalit: 0.008267 bestmemit: 0.828663 0.126173 0.100836 0.114794
-elapsed time:4.1060591666566
-overall elapsed time:8.31152777777778
-\end{verbatim}
-The output is a list holding, for each rebalancing period, the output of the optimization, such as the portfolio weights.
-\begin{verbatim}
-> out[[1]]$weights
-      US Bonds    US Equities Int'l Equities    Commodities
-    0.70588695     0.11145087     0.08751686     0.09514531
-> out[[2]]$weights
-      US Bonds    US Equities Int'l Equities    Commodities
-    0.70797640     0.10779728     0.08615059     0.09807574
-\end{verbatim}
-But also the value of the objective function:
-\begin{verbatim}
-> out[[1]]$out
-[1] 0.008195072
-> out[[2]]$out
-[1] 0.008266844
-\end{verbatim}
-And the first and last observation from the estimation sample:
-\begin{verbatim}
-> out[[1]]$data_summary
-$first
-           US Bonds US Equities Int'l Equities Commodities
-1980-01-31  -0.0272       0.061         0.0462      0.0568
-
-$last
-           US Bonds US Equities Int'l Equities Commodities
-2009-11-30   0.0134      0.0566         0.0199       0.015
-
-> out[[2]]$data_summary
-$first
-           US Bonds US Equities Int'l Equities Commodities
-1980-01-31  -0.0272       0.061         0.0462      0.0568
-
-$last
-           US Bonds US Equities Int'l Equities Commodities
-2009-12-31  -0.0175      0.0189         0.0143      0.0086
-\end{verbatim}
-
-Of course, DE is a stochastic optimizer and typically will only find a near-optimal solution that depends on the seed. The function \verb"optimize.portfolio.parallel" in \verb"PortfolioAnalytics" allows running an arbitrary number of portfolio sets in parallel in order to develop ``confidence bands'' around your solution. It is based on Revolution's \verb"foreach" package \citep{foreach}.
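-
-A minimal sketch of that idea, looping over seeds with \verb"foreach" directly rather than through the \verb"optimize.portfolio.parallel" interface (illustrative only, reusing the \verb"ObjSpec" defined above):
-\begin{verbatim}
-> library(foreach)
-> # re-run the same stochastic optimization under several seeds
-> sols <- foreach(seed = 1:10, .combine = rbind) %do% {
-+   set.seed(seed)
-+   optimize.portfolio_v1(R = indexes[,1:4], constraints = ObjSpec,
-+     optimize_method = "DEoptim", itermax = 50, search_size = 5000)[[1]]
-+ }
-> # rough 90% bands on each optimal weight across seeds
-> apply(sols, 2, quantile, probs = c(0.05, 0.95))
-\end{verbatim}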
- -\bibliography{PA} - - -\end{document} - Copied: pkg/PortfolioAnalytics/vignettes/risk_budget_optimization.Rnw (from rev 2880, pkg/PortfolioAnalytics/vignettes/optimization-overview.Snw) =================================================================== --- pkg/PortfolioAnalytics/vignettes/risk_budget_optimization.Rnw (rev 0) +++ pkg/PortfolioAnalytics/vignettes/risk_budget_optimization.Rnw 2013-08-25 11:40:17 UTC (rev 2881) @@ -0,0 +1,365 @@ +\documentclass[a4paper]{article} +\usepackage[round]{natbib} +\usepackage{bm} +\usepackage{verbatim} +\usepackage[latin1]{inputenc} +% \VignetteIndexEntry{Portfolio Optimization with CVaR budgets in PortfolioAnalytics} +\bibliographystyle{abbrvnat} + +\usepackage{url} + +\let\proglang=\textsf +\newcommand{\pkg}[1]{{\fontseries{b}\selectfont #1}} +\newcommand{\R}[1]{{\fontseries{b}\selectfont #1}} +\newcommand{\email}[1]{\href{mailto:#1}{\normalfont\texttt{#1}}} +\newcommand{\E}{\mathsf{E}} +\newcommand{\VAR}{\mathsf{VAR}} +\newcommand{\COV}{\mathsf{COV}} +\newcommand{\Prob}{\mathsf{P}} + +\renewcommand{\topfraction}{0.85} +\renewcommand{\textfraction}{0.1} +\renewcommand{\baselinestretch}{1.5} +\setlength{\textwidth}{15cm} \setlength{\textheight}{22cm} \topmargin-1cm \evensidemargin0.5cm \oddsidemargin0.5cm + +\usepackage[latin1]{inputenc} +% or whatever + +\usepackage{lmodern} +\usepackage[T1]{fontenc} +% Or whatever. Note that the encoding and the font should match. If T1 +% does not look nice, try deleting the line with the fontenc. + +\begin{document} + +\title{Vignette: Portfolio Optimization with CVaR budgets\\ +in PortfolioAnalytics} +\author{Kris Boudt, Peter Carl and Brian Peterson } +\date{June 1, 2010} + +\maketitle +\tableofcontents + + +\bigskip + +\section{General information} + +Risk budgets are a central tool to estimate and manage the portfolio risk allocation. They decompose total portfolio risk into the risk contribution of each position. \citet{ BoudtCarlPeterson2010} propose several portfolio allocation strategies that use an appropriate transformation of the portfolio Conditional Value at Risk (CVaR) budget as an objective or constraint in the portfolio optimization problem. This document explains how risk allocation optimized portfolios can be obtained under general constraints in the \verb"PortfolioAnalytics" package of \citet{PortAnalytics}. + +\verb"PortfolioAnalytics" is designed to provide numerical solutions for portfolio problems with complex constraints and objective sets comprised of any R function. It can e.g.~construct portfolios that minimize a risk objective with (possibly non-linear) per-asset constraints on returns and drawdowns \citep{CarlPetersonBoudt2010}. The generality of possible constraints and objectives is a distinctive characteristic of the package with respect to RMetrics \verb"fPortfolio" of \citet{fPortfolioBook}. For standard Markowitz optimization problems, use of \verb"fPortfolio" rather than \verb"PortfolioAnalytics" is recommended. + +\verb"PortfolioAnalytics" solves the following type of problem +\begin{equation} \min_w g(w) \ \ s.t. \ \ +\left\{ \begin{array}{l} h_1(w)\leq 0 \\ \vdots \\ h_q(w)\leq 0. \end{array} \right. \label{optimproblem}\end{equation} \verb"PortfolioAnalytics" first merges the objective function and constraints into a penalty augmented objective function +\begin{equation} L(w) = g(w) + \mbox{penalty}\sum_{i=1}^q \lambda_i \max(h_i(w),0), \label{eq:constrainedobj} \end{equation} +where $\lambda_i$ is a multiplier to tune the relative importance of the constraints. 
The default values of penalty and $\lambda_i$ (called \verb"multiplier" in \verb"PortfolioAnalytics") are 10000 and 1, respectively.
+
+The minimum of this function is found through the \emph{Differential Evolution} (DE) algorithm of \citet{StornPrice1997}, ported to R by \citet{MullenArdiaGilWindoverCline2009}. DE is known for its remarkable performance on continuous numerical problems \citep{PriceStornLampinen2006}. It has recently been advocated for optimizing portfolios under non-convex settings by \citet{Ardia2010} and \citet{Yollin2009}, among others. We use the R implementation of DE in the \verb"DEoptim" package of \citet{DEoptim}.
+
+The latest version of the \verb"PortfolioAnalytics" package can be downloaded from R-forge through the following command:
+\begin{verbatim}
+install.packages("PortfolioAnalytics", repos="http://R-Forge.R-project.org")
+\end{verbatim}
+
+Its principal functions are:
+\begin{itemize}
+\item \verb"constraint(assets,min,max,min_sum,max_sum)": the portfolio optimization specification starts with specifying the shape of the weight vector through the function \verb"constraint". The weights have to be between \verb"min" and \verb"max" and their sum between \verb"min_sum" and \verb"max_sum". The first argument \verb"assets" is either a number indicating the number of portfolio assets or a vector holding the names of the assets.
+
+\item \verb"add.objective(constraints, type, name)": \verb"constraints" is a list holding the objective to be minimized and the constraints. New elements to this list are added by the function \verb"add.objective". Many common risk budget objectives and constraints are prespecified and can be identified by specifying the \verb"type" and \verb"name".
+
+\item \verb"constrained_objective(w, R, constraints)": given the portfolio weight and return data, it evaluates the penalty augmented objective function in (\ref{eq:constrainedobj}).
+
+\item \verb"optimize.portfolio(R,constraints)": this function returns the portfolio weight that solves the problem in (\ref{optimproblem}). {\it R} is the multivariate return series of the portfolio components.
+
+\item \verb"optimize.portfolio.rebalancing(R,constraints,rebalance_on,trailing_periods)": this function solves the multiperiod optimization problem. It returns for each rebalancing period the optimal weights and allows the estimation sample to be either from inception or a moving window.
+
+\end{itemize}
+
+Next we illustrate these functions on monthly return data for bond, US equity, international equity and commodity indices, which are the first 4 series
+in the dataset \verb"indexes". The first step is to load the package \verb"PortfolioAnalytics" and the dataset. An important first note is that some of the functions (especially \verb"optimize.portfolio.rebalancing") require the dataset to be an \verb"xts" object \citep{xts}.
+
+
+<>=
+options(width=80)
+@
+
+<>=|
+library(PortfolioAnalytics)
+data(indexes)
+class(indexes)
+indexes <- indexes[,1:4]
+head(indexes,2)
+tail(indexes,2)
+@
+
+In what follows, we first illustrate the construction of the penalty augmented objective function. Then we present the code for solving the optimization problem.
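+
+The penalty mechanics of (\ref{eq:constrainedobj}) can also be sketched in a few lines of plain R. This is only a toy illustration on simulated data, not the implementation used by \verb"constrained_objective":
+\begin{verbatim}
+# toy objective g(w): negative mean portfolio return
+g <- function(w, R) -mean(R %*% w)
+# constraint violations h_i(w): full investment and long-only
+h <- function(w) c(abs(sum(w) - 1), -w)
+# penalty augmented objective L(w) with the default settings
+L <- function(w, R, penalty = 10000, lambda = 1)
+  g(w, R) + penalty * sum(lambda * pmax(h(w), 0))
+set.seed(1)
+Rsim <- matrix(rnorm(200 * 4, 0.005, 0.04), ncol = 4)
+L(rep(1/4, 4), Rsim)  # feasible weights: the penalty term is zero
+L(rep(1/3, 4), Rsim)  # weights sum to 4/3: the penalty term dominates
+\end{verbatim}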
+
+\section{Setting of the objective function}
+
+\subsection{Weight constraints}
+
+<>=|
+# create the portfolio specification object
+Wcons <- portfolio.spec( assets = colnames(indexes[,1:4]) )
+# Add box constraints
+Wcons <- add.constraint( portfolio=Wcons, type='box', min = 0, max=1 )
+# Add the full investment constraint that specifies the weights must sum to 1.
+Wcons <- add.constraint( portfolio=Wcons, type="full_investment")
+@
+
+Given the weight constraints, we can call the value of the function to be minimized. We consider a case of no violation and a case of violation. By default, \verb"normalize=TRUE", which means that if the sum of weights exceeds \verb"max_sum", the weight vector is normalized by multiplying it with \verb"sum(weights)/max_sum" such that the weights evaluated in the objective function satisfy the \verb"max_sum" constraint.
+<>=|
+constrained_objective( w = rep(1/4,4) , R = indexes[,1:4] , portfolio = Wcons)
+constrained_objective( w = rep(1/3,4) , R = indexes[,1:4] , portfolio = Wcons)
+constrained_objective( w = rep(1/3,4) , R = indexes[,1:4] , portfolio = Wcons, normalize=FALSE)
+@
+
+The latter value can be recalculated as the penalty times the weight violation, that is: $10000 \times 1/3.$
+
+\subsection{Minimum CVaR objective function}
+
+Suppose now we want to find the portfolio that minimizes the 95\% portfolio CVaR subject to the weight constraints listed above.
+
+<>=|
+ObjSpec = add.objective( portfolio = Wcons , type="risk",name="CVaR",
+arguments=list(p=0.95), enabled=TRUE)
+@
+
+The value of the objective function is:
+<>=|
+constrained_objective( w = rep(1/4,4) , R = indexes[,1:4] , portfolio = ObjSpec)
+@
+This is the CVaR of the equal-weight portfolio as computed by the function \verb"ES" in the \verb"PerformanceAnalytics" package of \citet{Carl2007}.
+<>=|
+library(PerformanceAnalytics)
+out<-ES(indexes[,1:4],weights = rep(1/4,4),p=0.95, portfolio_method="component")
+out$MES
+@
+All arguments in the function \verb"ES" can be passed on through \verb"arguments". E.g., to reduce the impact of extremes on the portfolio results, it is recommended to winsorize the data using the option clean="boudt".
+
+<>=|
+out<-ES(indexes[,1:4],weights = rep(1/4,4),p=0.95,clean="boudt", portfolio_method="component")
+out$MES
+@
+
+
+
+An additional argument that is not available for the moment in \verb"ES" is to estimate the conditional covariance matrix through
+the constant conditional correlation model of \citet{Bollerslev90}.
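+As a brief reminder of that model (the estimation details are beyond the scope of this vignette), the conditional covariance matrix in the CCC model factors as
+\[ H_t = D_t \, R \, D_t, \]
+where $D_t$ is the diagonal matrix of univariate GARCH conditional volatilities and $R$ is the constant conditional correlation matrix.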
+
+For the formulation of the objective function, this implies setting:
+<>=|
+ObjSpec = add.objective( portfolio = Wcons , type="risk",name="CVaR",
+arguments=list(p=0.95,clean="boudt"), enabled=TRUE, garch=TRUE)
+constrained_objective( w = rep(1/4,4) , R = indexes[,1:4] , portfolio = ObjSpec)
+@
+
+\subsection{Minimum CVaR concentration objective function}
+
+Add the minimum 95\% CVaR concentration objective to the objective function:
+<>=|
+ObjSpec = add.objective( portfolio = Wcons , type="risk_budget_objective",name="CVaR",
+arguments=list(p=0.95,clean="boudt"), min_concentration=TRUE, enabled=TRUE)
+@
+The value of the objective function is:
+<>=|
+constrained_objective( w = rep(1/4,4) , R = indexes[,1:4] , portfolio = ObjSpec)
+@
+We can verify that this is effectively the largest CVaR contribution of that portfolio as follows:
+<>=|
+ES(indexes[,1:4],weights = rep(1/4,4),p=0.95,clean="boudt", portfolio_method="component")
+@
+
+\subsection{Risk allocation constraints}
+
+We see that in the equal-weight portfolio, the international equities and commodities investment
+cause more than 30\% of total risk. We could specify as a constraint that no asset can contribute
+more than 30\% to total portfolio risk. This involves the construction of the following objective function:
+
+<>=|
+ObjSpec = add.objective( portfolio = Wcons , type="risk_budget_objective",name="CVaR", max_prisk = 0.3,
+arguments=list(p=0.95,clean="boudt"), enabled=TRUE)
+constrained_objective( w = rep(1/4,4) , R = indexes[,1:4] , portfolio = ObjSpec)
+@
+
+This value equals the default penalty parameter of 10000 times the sum of the exceedances: $10000 \times (0.045775103+0.054685023) \approx 1004.601.$
+
+\section{Optimization}
+
+The penalty augmented objective function is minimized through Differential Evolution. Two parameters are crucial in tuning the optimization: \verb"search_size" and \verb"itermax". The optimization routine
+\begin{enumerate}
+\item First creates the initial generation of \verb"NP= search_size/itermax" guesses for the optimal value of the parameter vector, using the \verb"random_portfolios" function generating random weights satisfying the weight constraints.
+\item Then DE evolves over this population of candidate solutions using alteration and selection operators in order to minimize the objective function. It iterates \verb"itermax" times.
+\end{enumerate} It is important that \verb"search_size/itermax" is high enough. It is generally recommended that this ratio is at least ten times the length of the weight vector. For more details on the use of the DE strategy in portfolio allocation, we refer the
+reader to \citet{Ardia2010}.
+
+\subsection{Minimum CVaR portfolio under an upper 40\% CVaR allocation constraint}
+
 [TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/returnanalytics -r 2881

From noreply at r-forge.r-project.org  Sun Aug 25 14:00:36 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Sun, 25 Aug 2013 14:00:36 +0200 (CEST)
Subject: [Returnanalytics-commits] r2882 - in pkg/PortfolioAnalytics: .
	vignettes
Message-ID: <20130825120036.2DB41185721@r-forge.r-project.org>

Author: braverock
Date: 2013-08-25 14:00:35 +0200 (Sun, 25 Aug 2013)
New Revision: 2882

Added:
   pkg/PortfolioAnalytics/vignettes/risk_budget_optimization.pdf
Modified:
   pkg/PortfolioAnalytics/.Rbuildignore
   pkg/PortfolioAnalytics/vignettes/risk_budget_optimization.Rnw
Log:
- minor updates to risk budgets vignette, add the pdf

Modified: pkg/PortfolioAnalytics/.Rbuildignore
===================================================================
--- pkg/PortfolioAnalytics/.Rbuildignore	2013-08-25 11:40:17 UTC (rev 2881)
+++ pkg/PortfolioAnalytics/.Rbuildignore	2013-08-25 12:00:35 UTC (rev 2882)
@@ -2,3 +2,4 @@
 .metadata
 .svn
 ^\.Rproj\.user$
+^.*\.Rproj$

Modified: pkg/PortfolioAnalytics/vignettes/risk_budget_optimization.Rnw
===================================================================
--- pkg/PortfolioAnalytics/vignettes/risk_budget_optimization.Rnw	2013-08-25 11:40:17 UTC (rev 2881)
+++ pkg/PortfolioAnalytics/vignettes/risk_budget_optimization.Rnw	2013-08-25 12:00:35 UTC (rev 2882)
@@ -31,6 +31,7 @@
 % does not look nice, try deleting the line with the fontenc.
 
 \begin{document}
+\SweaveOpts{concordance=TRUE}
 
 \title{Vignette: Portfolio Optimization with CVaR budgets\\
 in PortfolioAnalytics}

Added: pkg/PortfolioAnalytics/vignettes/risk_budget_optimization.pdf
===================================================================
--- pkg/PortfolioAnalytics/vignettes/risk_budget_optimization.pdf	                        (rev 0)
+++ pkg/PortfolioAnalytics/vignettes/risk_budget_optimization.pdf	2013-08-25 12:00:35 UTC (rev 2882)
@@ -0,0 +1,1902 @@
[binary PDF contents omitted]
T?a?OE????$|O??y?cm?`?.X4> +stream +x??YYo7~????5j?K.??Qh?H?????%1??e?+iUKv????????????0????A??????wi??D??7???X?$?%F +-Uo0?}???` +4????*?@g'}??@??!q????v?h?z?#???_??ob;???y^????T?p?J?7?*?C??3w+?Z3q??'*??7?'*??q<Q???lcnYTP.???????????;?1?;?x?0E?^Ua?~?e +??????y???? +2?L?Yqe 32?"RQ?????YoXY? )Lu???????? +?,?I?F\??Pd?????C#????#h 6???Sc?g?dA?WG??Wk??sT???I ?b? +GI?E??S ?f+???????=??.>?d??4?8aP?? \k??5X??c???d?K3w????(-"?Qy|??u??? ?8??S????:e?????#$*>`??^?9Bf.p8????19x?a?? +?ZD?q?E?}?U????O?4v\3???z?n?:?-?l?;zzGg?????-? ???XO?W +?N+?/? iz?.?M_>? ?????9VA;p?_???)??3jaWn?tJN?mQSOr?U!??^>v?F??ub?d?9??)?DZ?s??5??@????\I7?g?w???8??????v????Hs?4???geL=-?[??h?????6 8??? ?&c?":y?2???o??L]??O?vYm???{?s???xR?? ?1X?-i???C?)?0#q9??v?e???w???a?Q??m?m?C??> +stream +x??XIo?F??WE?J?5?,?$@?)?Ck???hI??J?#?V???or??5}i?y|???-3o?'????Q??????2?S?3e?VN?h:?>?>???cc???? +????$1v?`???B?? ??|??/?W1L?XF???????M?????fW??`?? +?u?F??????4^???Oce??????Eq4??*??.c?ObL-?????y`?LX?#??Q?M,??c?3? ?pO`????C???0????t?,???4?L?W???d???Mq- +??9?????{!????k?"??????n?2???G?JB??72*Q?O????$U??m[xxgA?e$??c???E+???-:?>??N??^}Z????u0??s?V?Uc?3??$??W?_???09?>????Q???+ ?C????s??)j?v9???$??$???? ????e????????D?.??*I?up????????c?o> 0u???8????P???~???@?$S?$m??????????+I?*S??aZ?????l(tc???W???G;}?Z?? ??naO???a5???:?????q??????JZ)??CR?JS????? Q?p>?? ???h??{j? Z?4h?,V;???; ?t??0?a~:?? ???2i +?K  +J???vL??;??*?4?6n??]QR?:?????r}?6?QC-??`???Mr?????C?^??X??gT??p???K??6| {4|?in~????y?=?r??#? +?E???#?RC?=??|?#,???L?? +?e??c?k?6??????????gA?"ht?ls??????3?K +K?l?+????:????*Yk l??Z?{??\???????w???]?????9C??C??M??va???????????ZUy????6EU?J?\????? ?.??@???9????H?Va??H??q??_JH????F?m T?3j?@??8??f?? +??.??P2?f0O ?????mb???f\$ d??b?p??CUd?>????$????cG<??????????K???6?[k????K?&??mu?t???????????7M??s4?q%??+H'l?????X??? +endstream +endobj +51 0 obj << +/Length 1330 +/Filter /FlateDecode +>> +stream +x??XK??6???0*??*R???? ?^ +??) +??????\????/9?H?d9k[>EQ?h8?f??????Oo?`??y"???q?%?H?A ?r0? ?y?G? ??Px???_?}?2Q?w?!?H%???t????o?`0???a?j?Tvd?^_??+?5S???_?????[C??s?V?[??{%???R?tu{??N?GL??C ???X???F?%?U????5F??n???????%bF%??4C>?| ????Px???b?>a?k????#???Y???W2?`?????5????? ?%`Z`n?a'V?P?|+?????0/?"?=h?? +J;V????O?%??/??d?Z??B????????? [?"??(?X?I?%?f?E^???04 < e???:?W=?????#???:aS??ji??z6????? ????u?:??b??W.?uSYj?\t)]&?VlK?M ?@x7??S????!y??????&??A`??Pl?? F????a?L?dT?m???E?? ????Z i??#??{??:?i?/???????2<H?-??? ??D;=??R?E???%??g?< +?T?SJ????~`???3????(k6;?3l7??[ga7?24<f?? ++vp??????6}?_1?<?\R?g?????|?'????? H?M?u??????-???$h?>????FN?o???w`??|?Y???|?vG0+??sNB???^?B?=k??H?_!?Z`l???*nC?/?Z?????;???{? T?%??b?f$?o(?P?M???k?????D??S??????#C???????%?*???E??lM??%?Z-?V?O??Na??Q??????? ??O??92?Zt???;R???1_NL??_?[?????,&?,T.?te??L?r???????n??Z?X???ds??O???`??;???????p??^>?p??????????G??/?0=j????u???I?6 +?+????? +??????P??? +endstream +endobj +54 0 obj << +/Length 1907 +/Filter /FlateDecode +>> +stream +x??XYo?F~??????^<?@??GR???<%AAK??FUJ????w??a?????5??????o???????Y?EkG????w??>?@???x*?/D? ?=??|????Gnr)?k??????#?????c?8?????????8)?K?OX??p],?qS????w????'ethc?ktU?q/?`J??+x?ei?n??t??vG. u?????C?A?9+??????N???`???n???n#N?KA??o+???mq???*?+???`+q?,??B6?&??=?????l?Q??AF??b???Rn??IP??[?? +???2!?~nA??BZ?????\???yll?=p??E?E7bt?[???W?j$?Z(z'?]?+y?z???????r ?6????T??#???????????$??????????i3,O-jj??-?E;x????????"?H\4??o +.~$Z#(?8?~??e?%?u{???I????A}?/??Z??%V???$?5?.???y%?w???L?+!?????A?AV??E!?U?????u2??KBc?p?N???C??x|?*??????Q???3?#??????>?4?h?????????)??br?o~!?????S\?l???!q??1?6???WGl??????NIR???&Dj??&?`?(??????#??r??Fi&U???}??0P? +i?X/??M?t?.lQ?? ?? 
--x??K?vag??s,?1MW?S???&????;M??7??@?3*'??}^*:??:;????d]?#?":n????~??H?????????3?? ??,x?9lfBFA????6?]??:??Q?W??????~%?~???U???!?fYd?Z?%??????z???bj?0??T?> +stream +x??XY??F ~???S!??v????$? ?l?p6 ??(?^?hl???l??_?C??c??Kdi???????}r???y?+?"5i?f?+?X???9'??n?z?E?}?+???O?7Y??G??sx???H +???=gI]??;???QZ??s?????D?.u??U???????\??Tp6??8wyO?c?@??gEStt??}??x|9?s?QO?x?.???tCsNA?|"??;! &???q??$?FB>%?????KL?8?|??x?'o?3?){~?rN?!?yE26?(?}??? ???( ????~U?????V???????P$yJ?@c??(\??s??f?J?D???I;0???D?5T?$ ? Lz,??t?rP????????? `?V?R?? +?N?yl????1?^?N4 0??W??Ou8!%?A?? ?(?(30?q"??$???/??/@?m3+n??q?,l??g??V$??W??E???8I'X??c??HP???s???J?U?5???-??l7^a??G???n??.?:?K???????p6???X??R??G??ZM?>h?????? ?? +pn???B?E?1,????? T?q??)?6??|???????h+D?~???? %?8?*??%K??(f?-???E?nfe?"???]K??/?;??EP6> +stream +x?31?3?T0P02Q02W06U05RH1?*?24PA#S?Tr.??'?~?????P?K??W???4?K?)?Y??K?E!?P? ???E??????o?B??@? +?a??'W $o&| +endstream +endobj +62 0 obj << +/Length 104 +/Filter /FlateDecode +>> +stream +x?31?3?T0P04W0#S#?C?B.?????)T&9????K?\??K?(?????PRT??????`????m?`????`?P???7?3`?v(P?????+? L5* +endstream +endobj +67 0 obj << +/Length 149 +/Filter /FlateDecode +>> +stream +x?31?35R0P0Bc3cs?C?B.c46K$?r9yr??+p?{E??=}J?JS???? ??]? + b?<]?00????????? +???????????@????0??????`d=0s at f???? d'?n.WO?@.?sud +endstream +endobj +88 0 obj << +/Length1 1807 +/Length2 12523 +/Length3 0 +/Length 13660 +/Filter /FlateDecode +>> +stream +x???P??? +????????%?; 0??n??'????w??n ????gw?????-??????w????j???`S???????? ?(???`aagbaaC??T???+G??:9???|?b?;M\?d&.oDE?=@????`??c??ca??????v?H?????L9?=??R?????ry???O?-??????Os??? dfbP4q???E43????@@??qA#`??????????db??v??e??\??@g????G?%;?_?1!R??@??Q??-\?M???7?-? h??f?jot?E??*>:??CV???Ws?L?????G ???M???v&?? {K???(??????0?7??hb? ~?7q3?????L? %?0y???????@.?L? ??jd???[?%????vv@{g????9???????pm??????E {s???0wu`??9?e%??????Y]?,,,?\<?#?af??GuO??J???o5?z;?oe}A?????&n@???+??????E???s?? ?h ?G????h??6'?@??m?X,???e??a?`{[????YTBSME?????V???=???\F6N++??????????V??T??Wv??(ko??????????6??????o%??>4???> '??????#??????????u??oFR????i?C??????l=?b??????m(??.???R???9h1?????????]??????m9K???%?V??;??& +u??lK?\?,?<{7??}nIPi}?y2?S??mE\?x???,Z?G?@??.??????????C?2???U9???W????ty4tnWe??K??t???F???????Yt??it???????~???p#???????t????dRZ6?b?|I?????j9???C?_?????^s? +o???????????"y??*?=?*?$??>t?Z +?5Tc_?M????mO????'4r? +f?e?C=-???K^g???s&????Py0?>!%??F?~?h?-^V9?Y???$??"p?~0Q7??I}R??? !?s_ck%o?q????|?T?1?d??G???q?P?K??9t?????????"?\???l?$??c?`?????????oZ?????n5????k???m??pBl?j????????*????K??????")?????;g???[?%?WNz?I?I?c??G?}??#y?:?? ? +t?5?? ?K$,o?4??????L??{h???W?V? ??#JhN?*???????d???w?5?????n?#N?z<d?iJ???57<{?????Wa??????w??Ule1?????q????[???????M +8??v=???T +??9???~?p%Y?"^(???p??[?Oh{Zb???uS???L?W???4??? ?k4????E6???-{????l#?\3??kV{?U!????F~?kb?? +lm?|?&~??R???|{?,$?!?*bWI?`L?f??\?-???&?????????r???`?\pV???I$??k?_h??-F????SM????$?+?4???5??U??'??????`%?gY?;u?A??k?U??????h?Jd-sg +?;?s???tE?i??8s?;}?RU?2i????4??J1?DM?d?m??#6?? +??{/`?1j????6?=??yy?ya?h???qlVHj,??[?[???!k?? ???? .?L?h????X?o#?`Q??PS?q??M?XS??????\Q? +Adx?G!???? ? +A~??~[????m ???dA?????6_PT8q???e?J?G?$C?????U?e?} ?jp?\u{?????e?8J??v^F???;@???m?;!|?E-????w?/~W? ???4F_L???????3??C|j???*?C?8?wQ_v? ??$c?aW??9?s?ooYB??dG??????? 1?mUdc?????p???<8h1?[?KR?T +??>F?/LC???n&Gj????5(Gm????????\l?}$?w???dTQ??I??j?W?Y??w??U??*??p????????9???j???U????? +??1??d??x??H??L??4??t?*?1?? +&?t*&*#?M???(??5?+????_CB,? K7c??9/??=J)fw???=??>S-???9??W?d???G?l?8?Ft ??^1???:?????:???????Z!?6?_ <???Y???T?,$???DW??i??????Dx}#?$k??X????w?U?'\-eZ?_?B?TxPN??)=l?>!?]U????Ym nH??????70??O??8i?Qm????"t???,-?#?8? S6?|F"?]| ??o?? 
[TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/returnanalytics -r 2882

From noreply at r-forge.r-project.org  Sun Aug 25 15:20:41 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Sun, 25 Aug 2013 15:20:41 +0200 (CEST)
Subject: [Returnanalytics-commits] r2883 - in pkg/PerformanceAnalytics/sandbox/pulkit: .
R data man Message-ID: <20130825132041.5ACFD183A2A@r-forge.r-project.org> Author: pulkit Date: 2013-08-25 15:20:41 +0200 (Sun, 25 Aug 2013) New Revision: 2883 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/data/ret.csv pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd Log: documentation changes Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-25 12:00:35 UTC (rev 2882) +++ pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-25 13:20:41 UTC (rev 2883) @@ -10,6 +10,7 @@ export(EconomicDrawdown) export(EDDCOPS) export(golden_section) +export(MaxDD) export(MinTrackRecord) export(MonteSimulTriplePenance) export(MultiBetaDrawdown) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R 2013-08-25 12:00:35 UTC (rev 2882) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R 2013-08-25 13:20:41 UTC (rev 2883) @@ -39,6 +39,7 @@ #'of Florida,September 2012. #' #'@examples +#'data(edhec) #'MultiBetaDrawdown(cbind(edhec,edhec),cbind(edhec[,2],edhec[,2]),sample = 2,ps=c(0.4,0.6)) #'BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431 #'@export Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-25 12:00:35 UTC (rev 2882) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-25 13:20:41 UTC (rev 2883) @@ -8,7 +8,7 @@ #' \deqn{x_t = Max\left\{0,\biggl(\frac{\lambda/\sigma + 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-EDD(t)}{1-EDD(t)}\biggr]\right\}} #' #' The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. -#' +#'dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) #' #'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns @@ -30,9 +30,8 @@ #'@examples #' #' # with S&P 500 data and T-bill data -#' -#'dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) -#'dt<-as.xts(dt) +#'data(ret) +#'dt<-as.xts(read.zoo(ret)) #'EDDCOPS(dt[,1],delta = 0.33,gamma = 0.7,Rf = (1+dt[,2])^(1/12)-1,geometric = TRUE) #' #'data(edhec) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R 2013-08-25 12:00:35 UTC (rev 2882) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R 2013-08-25 13:20:41 UTC (rev 2883) @@ -24,6 +24,7 @@ #'@references Yang, Z. 
George and Zhong, Liang, Optimal Portfolio Strategy to #'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012) #'@examples +#'data(edhec) #'EconomicDrawdown(edhec,0.08,100) #' #' @export Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R 2013-08-25 12:00:35 UTC (rev 2882) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R 2013-08-25 13:20:41 UTC (rev 2883) @@ -48,7 +48,7 @@ #' data(edhec) #' MaxDD(edhec,0.95,"ar") #' MaxDD(edhec[,1],0.95,"normal") #expected values 4.241799 6.618966 - +#'@export MaxDD<-function(R,confidence,type=c("ar","normal"),...) { Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-25 12:00:35 UTC (rev 2882) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-25 13:20:41 UTC (rev 2883) @@ -18,6 +18,7 @@ #' .\gamma}\biggr).\biggl[\frac{\delta-REDD(t,h)}{1-REDD(t,h)}\biggr]\right\}} #' #'The portion of the risk free asset is \eqn{x_f = 1 - x_1 - x_2}. +#'dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) #' #'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns @@ -39,8 +40,7 @@ #'@examples #' #' # with S&P 500 data and T-bill data -#' -#'dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) +#'dt<-data(ret) #'dt<-as.xts(dt) #'REDDCOPS(dt[,1],delta = 0.33,Rf = (1+dt[,3])^(1/12)-1,h = 12,geometric = TRUE,asset = "one") #' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/data/ret.csv =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/data/ret.csv 2013-08-25 12:00:35 UTC (rev 2882) +++ pkg/PerformanceAnalytics/sandbox/pulkit/data/ret.csv 2013-08-25 13:20:41 UTC (rev 2883) @@ -1,4 +1,4 @@ -Index;S.P;barc[1:432, 2];X3.month.T.bill +;"S&P";"barclays";"3 month t-bill" 1976-01-31;0.1183057989;0.0109;0.0473 1976-02-29;-0.0114019433;0.0033;0.05 1976-03-31;0.0306889981;0.0214;0.0497 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd 2013-08-25 12:00:35 UTC (rev 2882) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd 2013-08-25 13:20:41 UTC (rev 2883) @@ -34,12 +34,12 @@ The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. 
+ dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) } \examples{ # with S&P 500 data and T-bill data - -dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) -dt<-as.xts(dt) +data(ret) +dt<-as.xts(read.zoo(ret)) EDDCOPS(dt[,1],delta = 0.33,gamma = 0.7,Rf = (1+dt[,2])^(1/12)-1,geometric = TRUE) data(edhec) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd 2013-08-25 12:00:35 UTC (rev 2882) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd 2013-08-25 13:20:41 UTC (rev 2883) @@ -32,6 +32,7 @@ \code{\link{EconomicMax}} } \examples{ +data(edhec) EconomicDrawdown(edhec,0.08,100) } \author{ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd 2013-08-25 12:00:35 UTC (rev 2882) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd 2013-08-25 13:20:41 UTC (rev 2883) @@ -57,6 +57,7 @@ the market performs well. } \examples{ +data(edhec) MultiBetaDrawdown(cbind(edhec,edhec),cbind(edhec[,2],edhec[,2]),sample = 2,ps=c(0.4,0.6)) BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431 } Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd 2013-08-25 12:00:35 UTC (rev 2882) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd 2013-08-25 13:20:41 UTC (rev 2883) @@ -50,12 +50,12 @@ .\gamma}\biggr).\biggl[\frac{\delta-REDD(t,h)}{1-REDD(t,h)}\biggr]\right\}} The portion of the risk free asset is \eqn{x_f = 1 - x_1 - - x_2}. + - x_2}. dt<-read.zoo("../data/ret.csv",sep=",",header = + TRUE) } \examples{ # with S&P 500 data and T-bill data - -dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) +dt<-data(ret) dt<-as.xts(dt) REDDCOPS(dt[,1],delta = 0.33,Rf = (1+dt[,3])^(1/12)-1,h = 12,geometric = TRUE,asset = "one") From noreply at r-forge.r-project.org Sun Aug 25 19:12:28 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 25 Aug 2013 19:12:28 +0200 (CEST) Subject: [Returnanalytics-commits] r2884 - pkg/PerformanceAnalytics/sandbox/pulkit/R Message-ID: <20130825171228.E35D61850E0@r-forge.r-project.org> Author: pulkit Date: 2013-08-25 19:12:28 +0200 (Sun, 25 Aug 2013) New Revision: 2884 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R Log: error in psropt Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R 2013-08-25 13:20:41 UTC (rev 2883) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R 2013-08-25 17:12:28 UTC (rev 2884) @@ -12,8 +12,6 @@ #' #'This optimal vector is invariant of the value adopted by the parameter \eqn{SR^\**}. 
#'Gradient Ascent Logic is used to compute the weights using the Function PsrPortfolio - - #'@aliases PsrPortfolio #' #'@param R The return series @@ -185,7 +183,7 @@ x_mat = as.matrix(na.omit(x)) sum = 0 - output = .Call("sums",mat = x_mat,index,mean,dOrder,weights,mOrder,sum) + output = .Call("sums",mat = x_mat,index,mean,dOrder,weights,mOrder,sum,PACKAGE="noniid.pm") #for(i in 1:n){ # x1 = 0 # x2 = (x_mat[i,index]-mean[index])^dOrder @@ -213,7 +211,7 @@ get_Moments<-function(series,order,mean = 0){ sum = 0 mat = as.matrix(series) - sum = .Call("sums_m",mat,mean,order) + sum = .Call("sums_m",mat,mean,order,PACKAGE="noniid.pm") # for(i in series){ # sum = sum + (i-mean)^order # } From noreply at r-forge.r-project.org Sun Aug 25 19:49:32 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sun, 25 Aug 2013 19:49:32 +0200 (CEST) Subject: [Returnanalytics-commits] r2885 - pkg/PortfolioAnalytics/R Message-ID: <20130825174932.BEC5F185586@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-25 19:49:32 +0200 (Sun, 25 Aug 2013) New Revision: 2885 Modified: pkg/PortfolioAnalytics/R/charts.DE.R pkg/PortfolioAnalytics/R/charts.GenSA.R pkg/PortfolioAnalytics/R/charts.PSO.R pkg/PortfolioAnalytics/R/charts.ROI.R pkg/PortfolioAnalytics/R/charts.RP.R Log: Modified chart.Weights.* functions to plot negative weights and handle Inf or -Inf in box constraints. Modified: pkg/PortfolioAnalytics/R/charts.DE.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-25 17:12:28 UTC (rev 2884) +++ pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-25 17:49:32 UTC (rev 2885) @@ -38,9 +38,20 @@ bottommargin = minmargin } par(mar = c(bottommargin, 4, topmargin, 2) +.1) - plot(object$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) - points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) - points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + if(any(is.infinite(constraints$max)) | any(is.infinite(constraints$min))){ + # set ylim based on weights if box constraints contain Inf or -Inf + ylim <- range(object$weights) + } else { + # set ylim based on the range of box constraints min and max + ylim <- range(c(constraints$min, constraints$max)) + } + plot(object$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=ylim, ylab="Weights", main=main, pch=16, ...) + if(!any(is.infinite(constraints$min))){ + points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) + } + if(!any(is.infinite(constraints$max))){ + points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + } # if(!is.null(neighbors)){ # if(is.vector(neighbors)){ # xtract=extractStats(object) Modified: pkg/PortfolioAnalytics/R/charts.GenSA.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-25 17:12:28 UTC (rev 2884) +++ pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-25 17:49:32 UTC (rev 2885) @@ -27,9 +27,20 @@ bottommargin = minmargin } par(mar = c(bottommargin, 4, topmargin, 2) +.1) - plot(object$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) 
- points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) - points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + if(any(is.infinite(constraints$max)) | any(is.infinite(constraints$min))){ + # set ylim based on weights if box constraints contain Inf or -Inf + ylim <- range(object$weights) + } else { + # set ylim based on the range of box constraints min and max + ylim <- range(c(constraints$min, constraints$max)) + } + plot(object$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=ylim, ylab="Weights", main=main, pch=16, ...) + if(!any(is.infinite(constraints$min))){ + points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) + } + if(!any(is.infinite(constraints$max))){ + points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + } # if(!is.null(neighbors)){ # if(is.vector(neighbors)){ # xtract=extractStats(ROI) Modified: pkg/PortfolioAnalytics/R/charts.PSO.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-25 17:12:28 UTC (rev 2884) +++ pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-25 17:49:32 UTC (rev 2885) @@ -27,9 +27,20 @@ bottommargin = minmargin } par(mar = c(bottommargin, 4, topmargin, 2) +.1) - plot(object$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) - points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) - points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + if(any(is.infinite(constraints$max)) | any(is.infinite(constraints$min))){ + # set ylim based on weights if box constraints contain Inf or -Inf + ylim <- range(object$weights) + } else { + # set ylim based on the range of box constraints min and max + ylim <- range(c(constraints$min, constraints$max)) + } + plot(object$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=ylim, ylab="Weights", main=main, pch=16, ...) + if(!any(is.infinite(constraints$min))){ + points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) + } + if(!any(is.infinite(constraints$max))){ + points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + } # if(!is.null(neighbors)){ # if(is.vector(neighbors)){ # xtract=extractStats(ROI) Modified: pkg/PortfolioAnalytics/R/charts.ROI.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-25 17:12:28 UTC (rev 2884) +++ pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-25 17:49:32 UTC (rev 2885) @@ -27,9 +27,20 @@ bottommargin = minmargin } par(mar = c(bottommargin, 4, topmargin, 2) +.1) - plot(object$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, pch=16, ...) - points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) - points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + if(any(is.infinite(constraints$max)) | any(is.infinite(constraints$min))){ + # set ylim based on weights if box constraints contain Inf or -Inf + ylim <- range(object$weights) + } else { + # set ylim based on the range of box constraints min and max + ylim <- range(c(constraints$min, constraints$max)) + } + plot(object$weights, type="b", col="blue", axes=FALSE, xlab='', ylim=ylim, ylab="Weights", main=main, pch=16, ...) 
+ if(!any(is.infinite(constraints$min))){ + points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) + } + if(!any(is.infinite(constraints$max))){ + points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + } # if(!is.null(neighbors)){ # if(is.vector(neighbors)){ # xtract=extractStats(object) Modified: pkg/PortfolioAnalytics/R/charts.RP.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-25 17:12:28 UTC (rev 2884) +++ pkg/PortfolioAnalytics/R/charts.RP.R 2013-08-25 17:49:32 UTC (rev 2885) @@ -41,9 +41,20 @@ bottommargin = minmargin } par(mar = c(bottommargin, 4, topmargin, 2) +.1) - plot(object$random_portfolios[1,], type="b", col="orange", axes=FALSE, xlab='', ylim=c(0,max(constraints$max)), ylab="Weights", main=main, ...) - points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) - points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + if(any(is.infinite(constraints$max)) | any(is.infinite(constraints$min))){ + # set ylim based on weights if box constraints contain Inf or -Inf + ylim <- range(object$weights) + } else { + # set ylim based on the range of box constraints min and max + ylim <- range(c(constraints$min, constraints$max)) + } + plot(object$random_portfolios[1,], type="b", col="orange", axes=FALSE, xlab='', ylim=ylim, ylab="Weights", main=main, ...) + if(!any(is.infinite(constraints$min))){ + points(constraints$min, type="b", col="darkgray", lty="solid", lwd=2, pch=24) + } + if(!any(is.infinite(constraints$max))){ + points(constraints$max, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + } if(!is.null(neighbors)){ if(is.vector(neighbors)){ xtract=extractStats(object) From noreply at r-forge.r-project.org Mon Aug 26 00:53:38 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 26 Aug 2013 00:53:38 +0200 (CEST) Subject: [Returnanalytics-commits] r2886 - in pkg/PerformanceAnalytics/sandbox/pulkit: . 
R man vignettes Message-ID: <20130825225338.5F479184FD6@r-forge.r-project.org> Author: pulkit Date: 2013-08-26 00:53:38 +0200 (Mon, 26 Aug 2013) New Revision: 2886 Added: pkg/PerformanceAnalytics/sandbox/pulkit/R/na.skip.R Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R pkg/PerformanceAnalytics/sandbox/pulkit/R/TriplePenance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/rollDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw Log: error corrections Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-25 17:49:32 UTC (rev 2885) +++ pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-25 22:53:38 UTC (rev 2886) @@ -46,3 +46,4 @@ 'table.PSR.R' 'TriplePenance.R' 'TuW.R' + 'na.skip.R' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-25 17:49:32 UTC (rev 2885) +++ pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-25 22:53:38 UTC (rev 2886) @@ -5,15 +5,22 @@ export(CdarMultiPath) export(chart.BenchmarkSR) export(chart.Penance) +export(chart.REDD) export(chart.SRIndifference) +export(dd_norm) +export(diff_Q) export(DrawdownGPD) export(EconomicDrawdown) export(EDDCOPS) +export(get_minq) +export(getQ) +export(get_TuW) export(golden_section) export(MaxDD) export(MinTrackRecord) export(MonteSimulTriplePenance) export(MultiBetaDrawdown) +export(na.skip) export(ProbSharpeRatio) export(PsrPortfolio) export(REDDCOPS) @@ -22,3 +29,5 @@ export(table.Penance) export(table.PSR) export(TuW) +export(tuw_norm) +useDynLib(noniid.pm) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-25 17:49:32 UTC (rev 2885) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-25 22:53:38 UTC (rev 2886) @@ -40,7 +40,7 @@ #'@seealso \code{\link{plot}} #'@keywords ts multivariate distribution models hplot #'@examples -#' +#'data(edhec) #'chart.BenchmarkSR(edhec,vs="strategies") #'chart.BenchmarkSR(edhec,vs="sharpe") #' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R 2013-08-25 17:49:32 UTC (rev 2885) +++ 
pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R	2013-08-25 22:53:38 UTC (rev 2886)
@@ -14,6 +14,7 @@
 #'Gradient Ascent Logic is used to compute the weights using the Function PsrPortfolio
 #'@aliases PsrPortfolio
 #'
+#'@useDynLib noniid.pm
 #'@param R The return series
 #'@param refSR The benchmark Sharpe Ratio
 #'@param bounds The bounds for the weights
@@ -31,7 +32,7 @@
 #'@examples
 #'
 #'data(edhec)
-#'PsrPortfolio(edhec) 
+#'PsrPortfolio(edhec)
 #'@export

 PsrPortfolio<-function(R,refSR=0,bounds=NULL,MaxIter = 1000,delta = 0.005){
@@ -183,7 +184,7 @@
     x_mat = as.matrix(na.omit(x))
     sum = 0
-    output = .Call("sums",mat = x_mat,index,mean,dOrder,weights,mOrder,sum,PACKAGE="noniid.pm")
+    output = .Call("sums",mat = x_mat,index,mean,dOrder,weights,mOrder,sum)
     #for(i in 1:n){
     #    x1 = 0
     #    x2 = (x_mat[i,index]-mean[index])^dOrder
@@ -211,7 +212,7 @@
 get_Moments<-function(series,order,mean = 0){
     sum = 0
     mat = as.matrix(series)
-    sum = .Call("sums_m",mat,mean,order,PACKAGE="noniid.pm")
+    sum = .Call("sums_m",mat,mean,order)
     # for(i in series){
     #    sum = sum + (i-mean)^order
     # }

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R	2013-08-25 17:49:32 UTC (rev 2885)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R	2013-08-25 22:53:38 UTC (rev 2886)
@@ -41,14 +41,13 @@
 #'
 #' # with S&P 500 data and T-bill data
 #'dt<-data(ret)
-#'dt<-as.xts(dt)
+#'dt<-as.xts(read.zoo(ret))
 #'REDDCOPS(dt[,1],delta = 0.33,Rf = (1+dt[,3])^(1/12)-1,h = 12,geometric = TRUE,asset = "one")
 #'
 #'
 #' # with S&P 500 , barclays and T-bill data
-#'
-#'dt<-read.zoo("ret.csv",sep=";",header = TRUE)
-#'dt<-as.xts(dt)
+#'data(ret)
+#'dt<-as.xts(read.zoo(ret))
 #'REDDCOPS(dt[,1:2],delta = 0.33,Rf = (1+dt[,3])^(1/12)-1,h = 12,geometric = TRUE,asset = "two")
 #'
 #'data(edhec)

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R	2013-08-25 17:49:32 UTC (rev 2885)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R	2013-08-25 22:53:38 UTC (rev 2886)
@@ -24,7 +24,9 @@
 #'@seealso \code{\link{chart.REDD}} \code{\link{EconomicDrawdown}}
 #'\code{\link{rollDrawdown}} \code{\link{REDDCOPS}} \code{\link{EDDCOPS}}
 #'@examples
+#'data(edhec)
 #'rollEconomicMax(edhec,0.08,100)
+
 #'@export
 #'
 rollEconomicMax<-function(R,Rf,h,geometric = TRUE,...){

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/TriplePenance.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/R/TriplePenance.R	2013-08-25 17:49:32 UTC (rev 2885)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/R/TriplePenance.R	2013-08-25 22:53:38 UTC (rev 2886)
@@ -13,7 +13,7 @@
 ## REFERENCE:
 ## Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs
 ## and the 'Triple Penance' Rule (January 1, 2013).
-
+#'@export
 dd_norm<-function(x,confidence){
     # DESCRIPTION:
     # A function to return the maximum drawdown for a normal distribution
@@ -30,6 +30,7 @@
     return(c(dd*100,t))
 }

+#'@export
 tuw_norm<-function(x,confidence){
     # DESCRIPTION:
     # A function to return the Time under water
@@ -46,7 +47,7 @@

-
+#'@export
 get_minq<-function(R,confidence){

     # DESCRIPTION:
@@ -74,7 +75,7 @@
     return(c(-minQ$value*100,minQ$x))
 }

-
+#'@export
 getQ<-function(bets,phi,mu,sigma,dp0,confidence){

     # DESCRIPTION:
@@ -99,7 +100,10 @@
     var = var*((phi^(2*(bets+1))-1)/(phi^2-1)-2*(phi^(bets+1)-1)/(phi-1)+bets +1)
     q_value = mu_new + qnorm(1-confidence)*(var^0.5)
     return(q_value)
+
 }
+
+#'@export
 get_TuW<-function(R,confidence){

     # DESCRIPTION:
@@ -130,6 +134,7 @@

+#'@export
 diff_Q<-function(bets,phi,mu,sigma,dp0,confidence){

     # DESCRIPTION:

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R	2013-08-25 17:49:32 UTC (rev 2885)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R	2013-08-25 22:53:38 UTC (rev 2886)
@@ -31,6 +31,7 @@
 #' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the 'Triple Penance' Rule (January 1, 2013).
 #'
 #' @examples
+#' data(edhec)
 #' TuW(edhec,0.95,"ar")
 #' TuW(edhec[,1],0.95,"normal") # expected value 103.2573
 #'@export

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R	2013-08-25 17:49:32 UTC (rev 2885)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R	2013-08-25 22:53:38 UTC (rev 2886)
@@ -35,6 +35,7 @@
 #'@seealso \code{\link{plot}} \code{\link{table.Penance}} \code{\link{MaxDD}} \code{\link{TuW}}
 #'@keywords ts multivariate distribution models hplot
 #'@examples
+#'data(edhec)
 #'chart.Penance(edhec,0.95)
 #'
 #'@references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the 'Triple Penance' Rule (January 1, 2013).

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R	2013-08-25 17:49:32 UTC (rev 2885)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R	2013-08-25 22:53:38 UTC (rev 2886)
@@ -18,8 +18,10 @@
 #'@references Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to
 #'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012)
 #'@examples
+#'data(edhec)
 #'chart.REDD(edhec,0.08,20)
 #'
+#'@export
 chart.REDD<-function(R,rf,h, geometric = TRUE,legend.loc = NULL, colorset = (1:12),...)
 {

Added: pkg/PerformanceAnalytics/sandbox/pulkit/R/na.skip.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/R/na.skip.R	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/R/na.skip.R	2013-08-25 22:53:38 UTC (rev 2886)
@@ -0,0 +1,46 @@
+#'@export
+na.skip <- function (x, FUN=NULL, ...) # maybe add a trim capability?
+{ # @author Brian Peterson
+
+    # DESCRIPTION:
+
+    # Time series data often contains NA's, either due to missing days,
+    # noncontiguous series, or merging multiple series,
+    #
+    # Some calculations, such as return calculations, require data that
+    # looks like a vector, and needs the output of na.omit
+    #
+    # It is often convenient to apply these vector-like functions, but
+    # you still need to keep track of the structure of the original data.
+
+    # Inputs
+    # x      the time series to apply FUN to
+    # FUN    function to apply
+    # ...    any additional parameters to FUN
+
+    # Outputs:
+    # An xts time series that has the same index and NA's as the data
+    # passed in, after applying FUN
+
+    nx <- na.omit(x)
+    fx <- FUN(nx, ... = ...)
+    if (is.vector(fx)) {
+        result <- .xts(fx, .index(x), .indexCLASS = indexClass(x))
+    }
+    else {
+        result <- merge(fx, .xts(, .index(x)))
+    }
+    return(result)
+}
+
+###############################################################################
+# R (http://r-project.org/) Econometrics for Performance and Risk Analysis
+#
+# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson
+#
+# This R package is distributed under the terms of the GNU Public License (GPL)
+# for full details see the file COPYING
+#
+# $Id: na.skip.R 1855 2012-01-15 12:57:58Z braverock $
+#
+###############################################################################

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R	2013-08-25 17:49:32 UTC (rev 2885)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/R/redd.R	2013-08-25 22:53:38 UTC (rev 2886)
@@ -25,6 +25,7 @@
 #'@references Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to
 #'Control Maximum Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012)
 #'@examples
+#'data(edhec)
 #'rollDrawdown(edhec,0.08,100)
 #'
 #' @export

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd	2013-08-25 17:49:32 UTC (rev 2885)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd	2013-08-25 22:53:38 UTC (rev 2886)
@@ -56,14 +56,13 @@
 \examples{
 # with S&P 500 data and T-bill data
 dt<-data(ret)
-dt<-as.xts(dt)
+dt<-as.xts(read.zoo(ret))
 REDDCOPS(dt[,1],delta = 0.33,Rf = (1+dt[,3])^(1/12)-1,h = 12,geometric = TRUE,asset = "one")


 # with S&P 500 , barclays and T-bill data
-
-dt<-read.zoo("ret.csv",sep=";",header = TRUE)
-dt<-as.xts(dt)
+data(ret)
+dt<-as.xts(read.zoo(ret))
 REDDCOPS(dt[,1:2],delta = 0.33,Rf = (1+dt[,3])^(1/12)-1,h = 12,geometric = TRUE,asset = "two")

 data(edhec)

Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd	2013-08-25 17:49:32 UTC (rev 2885)
+++ pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd	2013-08-25 22:53:38 UTC (rev 2886)
@@ -44,6 +44,7 @@
 auto-regressive form.
} \examples{ +data(edhec) TuW(edhec,0.95,"ar") TuW(edhec[,1],0.95,"normal") # expected value 103.2573 } Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd 2013-08-25 17:49:32 UTC (rev 2885) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd 2013-08-25 22:53:38 UTC (rev 2886) @@ -68,6 +68,7 @@ \sum_{t=s+1}^{S} \rho_{S,t}}{S(S-1)}} } \examples{ +data(edhec) chart.BenchmarkSR(edhec,vs="strategies") chart.BenchmarkSR(edhec,vs="sharpe") } Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd 2013-08-25 17:49:32 UTC (rev 2885) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd 2013-08-25 22:53:38 UTC (rev 2886) @@ -65,6 +65,7 @@ water. } \examples{ +data(edhec) chart.Penance(edhec,0.95) } \author{ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/rollDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/rollDrawdown.Rd 2013-08-25 17:49:32 UTC (rev 2885) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/rollDrawdown.Rd 2013-08-25 22:53:38 UTC (rev 2886) @@ -35,6 +35,7 @@ \code{\link{rollEconomicMax}} } \examples{ +data(edhec) rollDrawdown(edhec,0.08,100) } \author{ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd 2013-08-25 17:49:32 UTC (rev 2885) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd 2013-08-25 22:53:38 UTC (rev 2886) @@ -36,6 +36,7 @@ \eqn{i^{th}} discrete time interval \eqn{{\triangle}t}. 
} \examples{ +data(edhec) rollEconomicMax(edhec,0.08,100) } \author{ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw 2013-08-25 17:49:32 UTC (rev 2885) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw 2013-08-25 22:53:38 UTC (rev 2886) @@ -44,25 +44,10 @@ <>= library(PerformanceAnalytics) data(edhec) +library(noniid.pm) @ -<>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R") -@ - -<>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R") -@ - -<>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R") -@ - -<>= -dyn.load("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/src/moment.so") -@ - \section{Probabilistic Sharpe Ratio} Given a predefined benchmark Sharpe ratio $SR^\ast$ , the observed Sharpe ratio $\hat{SR}$ can be expressed in probabilistic terms as Modified: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw 2013-08-25 17:49:32 UTC (rev 2885) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw 2013-08-25 22:53:38 UTC (rev 2886) @@ -34,34 +34,9 @@ <>= library(PerformanceAnalytics) data(edhec) +library(noniid.pm) @ -<>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/R/na.skip.R") -@ - - -<>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/redd.R") -@ - -<>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/Edd.R") -@ - -<>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/REM.R") -@ - - -<>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R") -@ - - -<>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R") -@ \section{ Rolling Economic Max } Rolling Economic Max at time t, looking back at portfolio Wealth history for a rolling window of length H is given by: Modified: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw 2013-08-25 17:49:32 UTC (rev 2885) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw 2013-08-25 22:53:38 UTC (rev 2886) @@ -34,15 +34,9 @@ <>= library(PerformanceAnalytics) data(edhec) +library(noniid.pm) @ - -<>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkSR.R") -@ -<>= -source("/home/pulkit/workspace/GSOC/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R") -@ \section{Benchmark Sharpe Ratio} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw 2013-08-25 17:49:32 UTC (rev 2885) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw 2013-08-25 22:53:38 UTC (rev 2886) @@ -34,24 +34,8 @@ <>= library(PerformanceAnalytics) data(edhec) +library(noniid.pm) @ - -<>= -source("../R/MaxDD.R") -@ - -<>= -source("../R/TriplePenance.R") -@ - -<>= -source("../R/GoldenSection.R") -@ - - -<>= -source("../R/TuW.R") -@ 
\section{ Maximum Drawdown }
Maximum Drawdown tells us up to how much a particular strategy could lose with a given confidence level. This function calculates Maximum Drawdown for two underlying processes: normal and autoregressive. For a normal process Maximum Drawdown is given by the formula

From noreply at r-forge.r-project.org  Mon Aug 26 01:28:51 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Mon, 26 Aug 2013 01:28:51 +0200 (CEST)
Subject: [Returnanalytics-commits] r2887 - pkg/PortfolioAnalytics/R
Message-ID: <20130825232851.1C8DA1854E8 at r-forge.r-project.org>

Author: rossbennett34
Date: 2013-08-26 01:28:50 +0200 (Mon, 26 Aug 2013)
New Revision: 2887

Modified:
   pkg/PortfolioAnalytics/R/constrained_objective.R
Log:
Adding var in the switch statement that calls StdDev. Removing var as its own function in the switch statement in constrained_objective.

Modified: pkg/PortfolioAnalytics/R/constrained_objective.R
===================================================================
--- pkg/PortfolioAnalytics/R/constrained_objective.R	2013-08-25 22:53:38 UTC (rev 2886)
+++ pkg/PortfolioAnalytics/R/constrained_objective.R	2013-08-25 23:28:50 UTC (rev 2887)
@@ -529,12 +529,10 @@
                 nargs$x <- ( R %*% w ) #do the multivariate mean/median with Kronecker product
             },
             sd =,
+            var =,
             StdDev = {
                 fun = match.fun(StdDev)
             },
-            var = {
-                fun = match.fun(var.portfolio)
-            },
             mVaR =,
             VaR = {
                 fun = match.fun(VaR)

From noreply at r-forge.r-project.org  Mon Aug 26 02:47:56 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Mon, 26 Aug 2013 02:47:56 +0200 (CEST)
Subject: [Returnanalytics-commits] r2888 - pkg/PortfolioAnalytics/R
Message-ID: <20130826004756.27CED18568B at r-forge.r-project.org>

Author: rossbennett34
Date: 2013-08-26 02:47:55 +0200 (Mon, 26 Aug 2013)
New Revision: 2888

Modified:
   pkg/PortfolioAnalytics/R/constrained_objective.R
Log:
Making 'StdDev' the name of the objective measure if the objective is 'var' in constrained_objective so it does not have to be changed in the generics or extract stats methods.

Modified: pkg/PortfolioAnalytics/R/constrained_objective.R
===================================================================
--- pkg/PortfolioAnalytics/R/constrained_objective.R	2013-08-25 23:28:50 UTC (rev 2887)
+++ pkg/PortfolioAnalytics/R/constrained_objective.R	2013-08-26 00:47:55 UTC (rev 2888)
@@ -587,8 +587,13 @@
       tmp_measure <- try((do.call(fun,.formals)), silent=TRUE)

       if(isTRUE(trace) | isTRUE(storage)) {
-        if(is.null(names(tmp_measure))) names(tmp_measure) <- objective$name
-        tmp_return[[objective$name]] <- tmp_measure
+        # Substitute 'StdDev' if the objective name is 'var'
+        # if the user passes in var as an objective name, we are actually
+        # calculating StdDev, so we need to change the name here.
+ tmp_objname <- objective$name + if(tmp_objname == "var") tmp_objname <- "StdDev" + if(is.null(names(tmp_measure))) names(tmp_measure) <- tmp_objname + tmp_return[[tmp_objname]] <- tmp_measure } if(inherits(tmp_measure, "try-error")) { From noreply at r-forge.r-project.org Mon Aug 26 03:04:29 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 26 Aug 2013 03:04:29 +0200 (CEST) Subject: [Returnanalytics-commits] r2889 - pkg/PortfolioAnalytics/R Message-ID: <20130826010429.38BE11804C8@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-26 03:04:28 +0200 (Mon, 26 Aug 2013) New Revision: 2889 Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R Log: Adding an opt_values slot to the output of optimize.portfolio per Doug's recommendation. This is just a copy of objective_measures. Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-26 00:47:55 UTC (rev 2888) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-26 01:04:28 UTC (rev 2889) @@ -616,8 +616,8 @@ # is it necessary to normalize the weights here? # weights <- normalize_weights(weights) names(weights) <- colnames(R) - - out <- list(weights=weights, objective_measures=constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures, out=minw$optim$bestval, call=call) + obj_vals <- constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures + out <- list(weights=weights, objective_measures=obj_vals, opt_values=obj_vals, out=minw$optim$bestval, call=call) if (isTRUE(trace)){ out$DEoutput <- minw out$DEoptim_objective_results <- try(get('.objectivestorage',pos='.GlobalEnv'),silent=TRUE) @@ -663,7 +663,9 @@ } #' re-call constrained_objective on the best portfolio, as above in DEoptim, with trace=TRUE to get results for out list out$weights <- min_objective_weights - out$objective_measures <- try(constrained_objective(w=min_objective_weights, R=R, portfolio=portfolio, trace=TRUE, normalize=FALSE)$objective_measures) + obj_vals <- try(constrained_objective(w=min_objective_weights, R=R, portfolio=portfolio, trace=TRUE, normalize=FALSE)$objective_measures) + out$objective_measures <- obj_vals + out$opt_values <- obj_vals out$call <- call #' construct out list to be as similar as possible to DEoptim list, within reason @@ -703,7 +705,8 @@ # Maximize Quadratic Utility if var and mean are specified as objectives roi_result <- gmv_opt(R=R, constraints=constraints, moments=moments, lambda=lambda, target=target) weights <- roi_result$weights - out <- list(weights=weights, objective_measures=suppressWarnings(constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures), out=roi_result$out, call=call) + obj_vals <- constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures + out <- list(weights=weights, objective_measures=obj_vals, opt_values=obj_vals, out=roi_result$out, call=call) } if(length(names(moments)) == 1 & "mean" %in% names(moments)) { # Maximize return if the only objective specified is mean @@ -711,12 +714,14 @@ # This is an MILP problem if max_pos is specified as a constraint roi_result <- maxret_milp_opt(R=R, constraints=constraints, moments=moments, target=target) weights <- roi_result$weights - out <- list(weights=weights, objective_measures=constrained_objective(w=weights, R=R, portfolio, trace=TRUE, 
normalize=FALSE)$objective_measures, out=roi_result$out, call=call) + obj_vals <- constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures + out <- list(weights=weights, objective_measures=obj_vals, opt_values=obj_vals, out=roi_result$out, call=call) } else { # Maximize return LP problem roi_result <- maxret_opt(R=R, constraints=constraints, moments=moments, target=target) weights <- roi_result$weights - out <- list(weights=weights, objective_measures=constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures, out=roi_result$out, call=call) + obj_vals <- constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures + out <- list(weights=weights, objective_measures=obj_vals, opt_values=obj_vals, out=roi_result$out, call=call) } } if( any(c("CVaR", "ES", "ETL") %in% names(moments)) ) { @@ -725,12 +730,14 @@ # This is an MILP problem if max_pos is specified as a constraint roi_result <- etl_milp_opt(R=R, constraints=constraints, moments=moments, target=target, alpha=alpha) weights <- roi_result$weights - out <- list(weights=weights, objective_measures=constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures, out=roi_result$out, call=call) + obj_vals <- constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures + out <- list(weights=weights, objective_measures=obj_vals, opt_values=obj_vals, out=roi_result$out, call=call) } else { # Minimize sample ETL/ES/CVaR LP Problem roi_result <- etl_opt(R=R, constraints=constraints, moments=moments, target=target, alpha=alpha) weights <- roi_result$weights - out <- list(weights=weights, objective_measures=constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures, out=roi_result$out, call=call) + obj_vals <- constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures + out <- list(weights=weights, objective_measures=obj_vals, opt_values=obj_vals, out=roi_result$out, call=call) } } } ## end case for ROI @@ -771,9 +778,10 @@ weights <- as.vector( minw$par) weights <- normalize_weights(weights) names(weights) <- colnames(R) - + obj_vals <- constrained_objective(w=weights, R=R, portfolio=portfolio, trace=TRUE)$objective_measures out <- list(weights=weights, - objective_measures=constrained_objective(w=weights, R=R, portfolio=portfolio, trace=TRUE)$objective_measures, + objective_measures=obj_vals, + opt_values=obj_vals, out=minw$value, call=call) if (isTRUE(trace)){ @@ -815,9 +823,10 @@ weights <- as.vector(minw$par) weights <- normalize_weights(weights) names(weights) <- colnames(R) - + obj_vals <- constrained_objective(w=weights, R=R, portfolio=portfolio, trace=TRUE)$objective_measures out = list(weights=weights, - objective_measures=constrained_objective(w=weights, R=R, portfolio=portfolio, trace=TRUE)$objective_measures, + objective_measures=obj_vals, + opt_values=obj_vals, out=minw$value, call=call) if (isTRUE(trace)){ @@ -925,6 +934,7 @@ #' \itemize{ #' \item{\code{weights}:}{ The optimal set weights.} #' \item{\code{objective_measures}:}{ A list containing the value of each objective corresponding to the optimal weights.} +#' \item{\code{opt_values}:}{ A list containing the value of each objective corresponding to the optimal weights.} #' \item{\code{out}:}{ The output of the solver.} #' \item{\code{call}:}{ The function call.} #' \item{\code{portfolio}:}{ The portfolio object.} From noreply 
at r-forge.r-project.org  Mon Aug 26 06:54:04 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Mon, 26 Aug 2013 06:54:04 +0200 (CEST)
Subject: [Returnanalytics-commits] r2890 - in pkg/PortfolioAnalytics: . R man sandbox
Message-ID: <20130826045404.9384C18589E at r-forge.r-project.org>

Author: rossbennett34
Date: 2013-08-26 06:54:04 +0200 (Mon, 26 Aug 2013)
New Revision: 2890

Added:
   pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd
Modified:
   pkg/PortfolioAnalytics/NAMESPACE
   pkg/PortfolioAnalytics/R/charts.efficient.frontier.R
   pkg/PortfolioAnalytics/R/extract.efficient.frontier.R
   pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd
   pkg/PortfolioAnalytics/man/create.EfficientFrontier.Rd
   pkg/PortfolioAnalytics/man/extractEfficientFrontier.Rd
   pkg/PortfolioAnalytics/man/optimize.portfolio.Rd
   pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R
Log:
Modifying efficient frontier code to plot efficient frontier in mean-sd space instead of mean-var space. Adding function to overlay multiple efficient frontiers of portfolios with different constraints.

Modified: pkg/PortfolioAnalytics/NAMESPACE
===================================================================
--- pkg/PortfolioAnalytics/NAMESPACE	2013-08-26 01:04:28 UTC (rev 2889)
+++ pkg/PortfolioAnalytics/NAMESPACE	2013-08-26 04:54:04 UTC (rev 2890)
@@ -7,6 +7,7 @@
 export(chart.EfficientFrontier.optimize.portfolio.ROI)
 export(chart.EfficientFrontier.optimize.portfolio)
 export(chart.EfficientFrontier)
+export(chart.EfficientFrontierOverlay)
 export(chart.RiskReward.optimize.portfolio.DEoptim)
 export(chart.RiskReward.optimize.portfolio.GenSA)
 export(chart.RiskReward.optimize.portfolio.pso)

Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R
===================================================================
--- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R	2013-08-26 01:04:28 UTC (rev 2889)
+++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R	2013-08-26 04:54:04 UTC (rev 2890)
@@ -15,10 +15,10 @@
 #' For objects created by optimize.portfolio with 'ROI' specified as the
 #' optimize_method:
 #' \itemize{
-#' \item The mean-var or mean-etl efficient frontier can be plotted for optimal
+#' \item The mean-StdDev or mean-etl efficient frontier can be plotted for optimal
 #' portfolio objects created by \code{optimize.portfolio}.
 #'
-#' \item If \code{match.col="var"}, the mean-variance efficient frontier is plotted.
+#' \item If \code{match.col="StdDev"}, the mean-StdDev efficient frontier is plotted.
 #'
 #' \item If \code{match.col="ETL"} (also "ES" or "CVaR"), the mean-etl efficient frontier is plotted.
 #' }
@@ -29,9 +29,10 @@
 #' each iteration, therefore we cannot extract and chart an efficient frontier.
 #'
 #' @param object optimal portfolio created by \code{\link{optimize.portfolio}}
-#' @param match.col string name of column to use for risk (horizontal axis).
-#' \code{match.col} must match the name of an objective in the \code{portfolio}
-#' object.
+#' @param match.col string name of column to use for risk (horizontal axis).
+#' \code{match.col} must match the name of an objective measure in the
+#' \code{objective_measures} or \code{opt_values} slot in the object created
+#' by \code{\link{optimize.portfolio}}.
#' @param n.portfolios number of portfolios to use to plot the efficient frontier #' @param xlim set the x-axis limit, same as in \code{\link{plot}} #' @param ylim set the y-axis limit, same as in \code{\link{plot}} @@ -56,21 +57,27 @@ wts <- object$weights objectclass <- class(object)[1] - objnames <- unlist(lapply(portf$objectives, function(x) x$name)) - if(!(match.col %in% objnames)){ - stop("match.col must match an objective name") - } + # objnames <- unlist(lapply(portf$objectives, function(x) x$name)) + # if(!(match.col %in% objnames)){ + # stop("match.col must match an objective name") + # } # get the optimal return and risk metrics xtract <- extractStats(object=object) - columnames <- colnames(xtract) + columnames <- names(xtract) if(!(("mean") %in% columnames)){ # we need to calculate the mean given the optimal weights opt_ret <- applyFUN(R=R, weights=wts, FUN="mean") } else { opt_ret <- xtract["mean"] } - opt_risk <- xtract[match.col] + # get the match.col column + mtc <- pmatch(match.col, columnames) + if(is.na(mtc)) { + mtc <- pmatch(paste(match.col,match.col,sep='.'), columnames) + } + if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") + opt_risk <- xtract[mtc] # get the data to plot scatter of asset returns asset_ret <- scatterFUN(R=R, FUN="mean") @@ -80,7 +87,7 @@ if(match.col %in% c("ETL", "ES", "CVaR")){ frontier <- meanetl.efficient.frontier(portfolio=portf, R=R, n.portfolios=n.portfolios) } - if(match.col %in% objnames){ + if(match.col == "StdDev"){ frontier <- meanvar.efficient.frontier(portfolio=portf, R=R, n.portfolios=n.portfolios) } # data points to plot the frontier @@ -221,6 +228,22 @@ wts_idx <- grep(pattern="^w\\.", cnames) wts <- frontier[, wts_idx] + # return along the efficient frontier + # get the "mean" column + mean.mtc <- pmatch("mean", cnames) + if(is.na(mean.mtc)) { + mean.mtc <- pmatch("mean.mean", cnames) + } + if(is.na(mean.mtc)) stop("could not match 'mean' with column name of extractStats output") + + # risk along the efficient frontier + # get the match.col column + mtc <- pmatch(match.col, cnames) + if(is.na(mtc)) { + mtc <- pmatch(paste(match.col,match.col,sep='.'),cnames) + } + if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") + # compute the weights for the barplot pos.weights <- +0.5 * (abs(wts) + wts) neg.weights <- -0.5 * (abs(wts) - wts) @@ -256,22 +279,7 @@ barplot(t(neg.weights), col = colorset, space = 0, add = TRUE, border = element.color, cex.axis=cex.axis, axes=FALSE, axisnames=FALSE, ...) 
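# [Editorial aside] A self-contained illustration of the pmatch() fallback used
# above; the column names are invented. Objective measures can be stored under
# doubled names (e.g. "mean.mean"), in which case the plain name is an
# ambiguous partial match and the doubled name must be tried:
cols <- c("mean.mean", "mean.sd")               # hypothetical extractStats names
pmatch("mean", cols)                            # NA: ambiguous partial match
pmatch(paste("mean", "mean", sep="."), cols)    # 1: the doubled name resolves it
cols2 <- c("mean", "StdDev.StdDev")
pmatch("StdDev", cols2)                         # 2: a unique partial match succeeds directly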
- # return along the efficient frontier - # get the "mean" column - mean.mtc <- pmatch("mean", cnames) - if(is.na(mean.mtc)) { - mean.mtc <- pmatch("mean.mean", cnames) - } - if(is.na(mean.mtc)) stop("could not match 'mean' with column name of extractStats output") - # risk along the efficient frontier - # get the match.col column - mtc <- pmatch(match.col, cnames) - if(is.na(mtc)) { - mtc <- pmatch(paste(match.col,match.col,sep='.'),cnames) - } - if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") - - # Add labels ef.return <- frontier[, mean.mtc] ef.risk <- frontier[, mtc] @@ -322,14 +330,14 @@ if(is.na(mean.mtc)) { mean.mtc <- pmatch("mean.mean", cnames) } - if(is.na(mean.mtc)) stop("could not match 'mean' with column name of extractStats output") + if(is.na(mean.mtc)) stop("could not match 'mean' with column name of efficient frontier") # get the match.col column mtc <- pmatch(match.col, cnames) if(is.na(mtc)) { mtc <- pmatch(paste(match.col,match.col,sep='.'),cnames) } - if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") + if(is.na(mtc)) stop("could not match match.col with column name of efficient frontier") if(chart.assets){ # get the data to plot scatter of asset returns @@ -358,3 +366,78 @@ box(col = element.color) } +#' Plot multiple efficient frontiers +#' +#' Overlay the efficient frontiers of multiple portfolio objects on a single plot +#' +#' @param R an xts object of asset returns +#' @param portfolio_list list of portfolio objects created by \code{\link{portfolio.spec}} +#' @param type type of efficient frontier, see \code{\link{create.EfficientFrontier}} +#' @param n.portfolios number of portfolios to extract along the efficient frontier. +#' This is only used for objects of class \code{optimize.portfolio} +#' @param match.col string name of column to use for risk (horizontal axis). +#' Must match the name of an objective. +#' @param search_size passed to optimize.portfolio for type="DEoptim" or type="random" +#' @param main main title used in the plot. +#' @param cex.axis The magnification to be used for sizing the axis text relative to the current setting of 'cex', similar to \code{\link{plot}}. +#' @param element.color provides the color for drawing less-important chart elements, such as the box lines, axis lines, etc. +#' @param legend.loc location of the legend; NULL, "bottomright", "bottom", "bottomleft", "left", "topleft", "top", "topright", "right" and "center" +#' @param legend.labels character vector to use for the legend labels +#' @param cex.legend The magnification to be used for sizing the legend relative to the current setting of 'cex', similar to \code{\link{plot}}. +#' @param xlim set the x-axis limit, same as in \code{\link{plot}} +#' @param ylim set the y-axis limit, same as in \code{\link{plot}} +#' @param ...
passthrough parameters to \code{\link{plot}} +#' @author Ross Bennett +#' @export +chart.EfficientFrontierOverlay <- function(R, portfolio_list, type, n.portfolios=25, match.col="ES", search_size=2000, main="Efficient Frontiers", cex.axis=0.8, element.color="darkgray", legend.loc=NULL, legend.labels=NULL, cex.legend=0.8, xlim=NULL, ylim=NULL, ...){ + # create multiple efficient frontier objects (one per portfolio in portfolio_list) + if(!is.list(portfolio_list)) stop("portfolio_list must be passed in as a list") + if(length(portfolio_list) == 1) warning("Only one portfolio object in portfolio_list") + # store in out + out <- list() + for(i in 1:length(portfolio_list)){ + if(!is.portfolio(portfolio_list[[i]])) stop("portfolio in portfolio_list must be of class 'portfolio'") + out[[i]] <- create.EfficientFrontier(R=R, portfolio=portfolio_list[[i]], type=type, n.portfolios=n.portfolios, match.col=match.col, search_size=search_size) + } + # get the data to plot scatter of asset returns + asset_ret <- scatterFUN(R=R, FUN="mean") + asset_risk <- scatterFUN(R=R, FUN=match.col) + rnames <- colnames(R) + # plot the assets + plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) + axis(1, cex.axis = cex.axis, col = element.color) + axis(2, cex.axis = cex.axis, col = element.color) + box(col = element.color) + # risk-return scatter of the assets + points(x=asset_risk, y=asset_ret) + text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + + for(i in 1:length(out)){ + tmp <- out[[i]] + tmpfrontier <- tmp$frontier + cnames <- colnames(tmpfrontier) + + # get the "mean" column + mean.mtc <- pmatch("mean", cnames) + if(is.na(mean.mtc)) { + mean.mtc <- pmatch("mean.mean", cnames) + } + if(is.na(mean.mtc)) stop("could not match 'mean' with column name of extractStats output") + + # get the match.col column + mtc <- pmatch(match.col, cnames) + if(is.na(mtc)) { + mtc <- pmatch(paste(match.col, match.col, sep='.'),cnames) + } + if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") + lines(x=tmpfrontier[, mtc], y=tmpfrontier[, mean.mtc], col=i, lty=i, lwd=2) + } + if(!is.null(legend.loc)){ + if(is.null(legend.labels)){ + legend.labels <- paste("Portfolio", 1:length(out), sep=".") + } + legend(legend.loc, legend=legend.labels, col=1:length(out), lty=1:length(out), lwd=2, cex=cex.legend, bty="n") + } + return(invisible(out)) +} + Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-26 01:04:28 UTC (rev 2889) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-26 04:54:04 UTC (rev 2890) @@ -255,21 +255,23 @@ #' #' @details currently there are 4 'types' supported to create an efficient frontier #' \itemize{ -#' \item{"mean-var":}{ This is a special case for an efficient frontier that -#' can be created by a QP solver. The \code{portfolio} object should have two +#' \item{"mean-var", "mean-sd", or "mean-StdDev":}{ This is a special case for +#' an efficient frontier that can be created by a QP solver. +#' The \code{portfolio} object should have two #' objectives: 1) mean and 2) var. The efficient frontier will be created via #' \code{\link{meanvar.efficient.frontier}}.} -#' \item{"mean-etl"}{ This is a special case for an efficient frontier that -#' can be created by an LP solver. 
The \code{portfolio} object should have two -#' objectives: 1) mean and 2) etl The efficient frontier will be created via +#' \item{"mean-ETL", "mean-ES", "mean-CVaR", "mean-etl"}{ This is a special +#' case for an efficient frontier that can be created by an LP solver. +#' The \code{portfolio} object should have two objectives: 1) mean +#' and 2) ETL/ES/CVaR. The efficient frontier will be created via #' \code{\link{meanetl.efficient.frontier}}.} #' \item{"DEoptim"}{ This can handle more complex constraints and objectives -#' than the simple mean-var and mean-etl cases. For this type, we actually +#' than the simple mean-var and mean-ETL cases. For this type, we actually #' call \code{\link{optimize.portfolio}} with \code{optimize_method="DEoptim"} #' and then extract the efficient frontier with #' \code{\link{extract.efficient.frontier}}.} #' \item{"random"}{ This can handle more complex constraints and objectives -#' than the simple mean-var and mean-etl cases. For this type, we actually +#' than the simple mean-var and mean-ETL cases. For this type, we actually #' call \code{\link{optimize.portfolio}} with \code{optimize_method="random"} #' and then extract the efficient frontier with #' \code{\link{extract.efficient.frontier}}.} @@ -291,17 +293,22 @@ #' \code{\link{meanetl.efficient.frontier}}, #' \code{\link{extract.efficient.frontier}} #' @export -create.EfficientFrontier <- function(R, portfolio, type=c("mean-var", "mean-etl", "random", "DEoptim"), n.portfolios=25, match.col="ES", search_size=2000, ...){ +create.EfficientFrontier <- function(R, portfolio, type, n.portfolios=25, match.col="ES", search_size=2000, ...){ # This is just a wrapper around a few functions to easily create efficient frontiers # given a portfolio object and other parameters if(!is.portfolio(portfolio)) stop("portfolio must be of class 'portfolio'") type <- type[1] switch(type, + "mean-sd"=, + "mean-StdDev"=, "mean-var" = {frontier <- meanvar.efficient.frontier(portfolio=portfolio, R=R, n.portfolios=n.portfolios) }, + "mean-ETL"=, + "mean-CVaR"=, + "mean-ES"=, "mean-etl" = {frontier <- meanetl.efficient.frontier(portfolio=portfolio, R=R, n.portfolios=n.portfolios) @@ -341,8 +348,9 @@ #' created via \code{meanetl.efficient.frontier}. #' #' If the object is an \code{optimize.portfolio.ROI} object and \code{match.col} -#' is "var", then the mean-var efficient frontier will be created via -#' \code{meanvar.efficient.frontier}. +#' is "StdDev", then the mean-StdDev efficient frontier will be created via +#' \code{meanvar.efficient.frontier}. Note that if 'var' is specified as the +#' name of an objective, the value returned will be 'StdDev'. #' #' For objects created by \code{optimize.portfolo} with the DEoptim, random, or #' pso solvers, the efficient frontier will be extracted from the object via @@ -351,8 +359,9 @@ #' #' @param object an optimal portfolio object created by \code{optimize.portfolio} #' @param match.col string name of column to use for risk (horizontal axis). -#' \code{match.col} must match the name of an objective in the \code{portfolio} -#' object. +#' \code{match.col} must match the name of an objective measure in the +#' \code{objective_measures} or \code{opt_values} slot in the object created +#' by \code{\link{optimize.portfolio}}. 
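# [Editorial aside] The empty cases in the switch() added above fall through to
# the next non-empty case, so several type aliases share one branch; a
# standalone sketch of the idiom:
type <- "mean-sd"
branch <- switch(type,
                 "mean-sd"=,
                 "mean-StdDev"=,
                 "mean-var"="QP frontier via meanvar.efficient.frontier",
                 "mean-ETL"=,
                 "mean-CVaR"=,
                 "mean-ES"=,
                 "mean-etl"="LP frontier via meanetl.efficient.frontier")
branch    # "QP frontier via meanvar.efficient.frontier"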
#' @param n.portfolios number of portfolios to use to plot the efficient frontier #' @return an \code{efficient.frontier} object with weights and other metrics along the efficient frontier #' @author Ross Bennett @@ -372,17 +381,17 @@ if(is.null(R)) stop(paste("Not able to get asset returns from", object, "run optimize.portfolio with trace=TRUE")) # get the objective names and check if match.col is an objective name - objnames <- unlist(lapply(portf$objectives, function(x) x$name)) - if(!(match.col %in% objnames)){ - stop("match.col must match an objective name") - } + # objnames <- unlist(lapply(portf$objectives, function(x) x$name)) + # if(!(match.col %in% objnames)){ + # stop("match.col must match an objective name") + # } # We need to create the efficient frontier if object is of class optimize.portfolio.ROI if(inherits(object, "optimize.portfolio.ROI")){ if(match.col %in% c("ETL", "ES", "CVaR")){ frontier <- meanetl.efficient.frontier(portfolio=portf, R=R, n.portfolios=n.portfolios) } - if(match.col == "var"){ + if(match.col == "StdDev"){ frontier <- meanvar.efficient.frontier(portfolio=portf, R=R, n.portfolios=n.portfolios) } } # end optimize.portfolio.ROI Modified: pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-26 01:04:28 UTC (rev 2889) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-26 04:54:04 UTC (rev 2890) @@ -32,9 +32,11 @@ \item{object}{optimal portfolio created by \code{\link{optimize.portfolio}}} - \item{match.col}{string name of column to use for risk - (horizontal axis). \code{match.col} must match the name - of an objective in the \code{portfolio} object.} + \item{match.col}{string name of column to use for risk + (horizontal axis). \code{match.col} must match the name of an + objective measure in the \code{objective_measures} or + \code{opt_values} slot in the object created by + \code{\link{optimize.portfolio}}.} \item{n.portfolios}{number of portfolios to use to plot the efficient frontier} @@ -68,11 +70,11 @@ For objects created by optimize.portfolio with 'ROI' specified as the optimize_method: \itemize{ \item The - mean-var or mean-etl efficient frontier can be plotted + mean-StdDev or mean-etl efficient frontier can be plotted for optimal portfolio objects created by \code{optimize.portfolio}. - \item If \code{match.col="var"}, the mean-variance + \item If \code{match.col="StdDev"}, the mean-StdDev efficient frontier is plotted. \item If \code{match.col="ETL"} (also "ES" or "CVaR"), Added: pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd 2013-08-26 04:54:04 UTC (rev 2890) @@ -0,0 +1,68 @@ +\name{chart.EfficientFrontierOverlay} +\alias{chart.EfficientFrontierOverlay} +\title{Plot multiple efficient frontiers} +\usage{ + chart.EfficientFrontierOverlay(R, portfolio_list, type, + n.portfolios = 25, match.col = "ES", + search_size = 2000, main = "Efficient Frontiers", + cex.axis = 0.8, element.color = "darkgray", + legend.loc = NULL, legend.labels = NULL, + cex.legend = 0.8, xlim = NULL, ylim = NULL, ...)
+} +\arguments{ + \item{R}{an xts object of asset returns} + + \item{portfolio_list}{list of portfolio objects created + by \code{\link{portfolio.spec}}} + + \item{type}{type of efficient frontier, see + \code{\link{create.EfficientFrontier}}} + + \item{n.portfolios}{number of portfolios to extract along + the efficient frontier. This is only used for objects of + class \code{optimize.portfolio}} + + \item{match.col}{string name of column to use + for risk (horizontal axis). Must match the name of an + objective.} + + \item{search_size}{passed to optimize.portfolio for + type="DEoptim" or type="random"} + + \item{main}{main title used in the plot.} + + \item{cex.axis}{The magnification to be used for sizing + the axis text relative to the current setting of 'cex', + similar to \code{\link{plot}}.} + + \item{element.color}{provides the color for drawing + less-important chart elements, such as the box lines, + axis lines, etc.} + + \item{legend.loc}{location of the legend; NULL, + "bottomright", "bottom", "bottomleft", "left", "topleft", + "top", "topright", "right" and "center"} + + \item{legend.labels}{character vector to use for the + legend labels} + + \item{cex.legend}{The magnification to be used for sizing + the legend relative to the current setting of 'cex', + similar to \code{\link{plot}}.} + + \item{xlim}{set the x-axis limit, same as in + \code{\link{plot}}} + + \item{ylim}{set the y-axis limit, same as in + \code{\link{plot}}} + + \item{...}{passthrough parameters to \code{\link{plot}}} +} +\description{ + Overlay the efficient frontiers of multiple portfolio + objects on a single plot +} +\author{ + Ross Bennett +} + Modified: pkg/PortfolioAnalytics/man/create.EfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/create.EfficientFrontier.Rd 2013-08-26 01:04:28 UTC (rev 2889) +++ pkg/PortfolioAnalytics/man/create.EfficientFrontier.Rd 2013-08-26 04:54:04 UTC (rev 2890) @@ -2,8 +2,7 @@ \alias{create.EfficientFrontier} \title{create an efficient frontier} \usage{ - create.EfficientFrontier(R, portfolio, - type = c("mean-var", "mean-etl", "random", "DEoptim"), + create.EfficientFrontier(R, portfolio, type, n.portfolios = 25, match.col = "ES", search_size = 2000, ...) } @@ -39,26 +38,27 @@ } \details{ currently there are 4 'types' supported to create an - efficient frontier \itemize{ \item{"mean-var":}{ This is - a special case for an efficient frontier that can be - created by a QP solver. The \code{portfolio} object - should have two objectives: 1) mean and 2) var. The - efficient frontier will be created via - \code{\link{meanvar.efficient.frontier}}.} - \item{"mean-etl"}{ This is a special case for an - efficient frontier that can be created by an LP solver. + efficient frontier \itemize{ \item{"mean-var", "mean-sd", + or "mean-StdDev":}{ This is a special case for an + efficient frontier that can be created by a QP solver. The \code{portfolio} object should have two objectives: - 1) mean and 2) etl The efficient frontier will be created - via \code{\link{meanetl.efficient.frontier}}.} + 1) mean and 2) var. The efficient frontier will be + created via \code{\link{meanvar.efficient.frontier}}.} + \item{"mean-ETL", "mean-ES", "mean-CVaR", "mean-etl"}{ + This is a special case for an efficient frontier that can + be created by an LP solver. The \code{portfolio} object + should have two objectives: 1) mean and 2) ETL/ES/CVaR.
+ The efficient frontier will be created via + \code{\link{meanetl.efficient.frontier}}.} \item{"DEoptim"}{ This can handle more complex constraints and objectives than the simple mean-var and - mean-etl cases. For this type, we actually call + mean-ETL cases. For this type, we actually call \code{\link{optimize.portfolio}} with \code{optimize_method="DEoptim"} and then extract the efficient frontier with \code{\link{extract.efficient.frontier}}.} \item{"random"}{ This can handle more complex constraints - and objectives than the simple mean-var and mean-etl + and objectives than the simple mean-var and mean-ETL cases. For this type, we actually call \code{\link{optimize.portfolio}} with \code{optimize_method="random"} and then extract the Modified: pkg/PortfolioAnalytics/man/extractEfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/extractEfficientFrontier.Rd 2013-08-26 01:04:28 UTC (rev 2889) +++ pkg/PortfolioAnalytics/man/extractEfficientFrontier.Rd 2013-08-26 04:54:04 UTC (rev 2890) @@ -11,7 +11,9 @@ \item{match.col}{string name of column to use for risk (horizontal axis). \code{match.col} must match the name - of an objective in the \code{portfolio} object.} + of an objective measure in the \code{objective_measures} + or \code{opt_values} slot in the object created by + \code{\link{optimize.portfolio}}.} \item{n.portfolios}{number of portfolios to use to plot the efficient frontier} @@ -31,9 +33,11 @@ \code{meanetl.efficient.frontier}. If the object is an \code{optimize.portfolio.ROI} object - and \code{match.col} is "var", then the mean-var + and \code{match.col} is "StdDev", then the mean-StdDev efficient frontier will be created via - \code{meanvar.efficient.frontier}. + \code{meanvar.efficient.frontier}. Note that if 'var' is + specified as the name of an objective, the value returned + will be 'StdDev'. 
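[Editorial sketch: a usage example consistent with the documentation above,
mirroring the package's sandbox script; the return series R and the mean-var
portfolio specification meanvar.portf are assumed to already exist.]

opt <- optimize.portfolio(R=R, portfolio=meanvar.portf, optimize_method="ROI", trace=TRUE)
ef <- extractEfficientFrontier(object=opt, match.col="StdDev", n.portfolios=25)
chart.Weights.EF(ef, match.col="StdDev", colorset=bluemono)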
For objects created by \code{optimize.portfolo} with the DEoptim, random, or pso solvers, the efficient frontier Modified: pkg/PortfolioAnalytics/man/optimize.portfolio.Rd =================================================================== --- pkg/PortfolioAnalytics/man/optimize.portfolio.Rd 2013-08-26 01:04:28 UTC (rev 2889) +++ pkg/PortfolioAnalytics/man/optimize.portfolio.Rd 2013-08-26 04:54:04 UTC (rev 2890) @@ -68,6 +68,8 @@ \item{\code{weights}:}{ The optimal set weights.} \item{\code{objective_measures}:}{ A list containing the value of each objective corresponding to the optimal + weights.} \item{\code{opt_values}:}{ A list containing + the value of each objective corresponding to the optimal weights.} \item{\code{out}:}{ The output of the solver.} \item{\code{call}:}{ The function call.} \item{\code{portfolio}:}{ The portfolio object.} Modified: pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R 2013-08-26 01:04:28 UTC (rev 2889) +++ pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R 2013-08-26 04:54:04 UTC (rev 2890) @@ -36,22 +36,22 @@ # create efficient frontiers # mean-var efficient frontier -meanvar.ef <- create.EfficientFrontier(R=R, portfolio=meanvar.portf, type="mean-var") -chart.EfficientFrontier(meanvar.ef, match.col="var", type="b") -chart.Weights.EF(meanvar.ef, colorset=bluemono, match.col="var") +meanvar.ef <- create.EfficientFrontier(R=R, portfolio=meanvar.portf, type="mean-StdDev") +chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="b") +chart.Weights.EF(meanvar.ef, colorset=bluemono, match.col="StdDev") # run optimize.portfolio and chart the efficient frontier for that object opt_meanvar <- optimize.portfolio(R=R, portfolio=meanvar.portf, optimize_method="ROI", trace=TRUE) -chart.EfficientFrontier(opt_meanvar, match.col="var", n.portfolios=50) +chart.EfficientFrontier(opt_meanvar, match.col="StdDev", n.portfolios=50) # The weights along the efficient frontier can be plotted by passing in the # optimize.portfolio output object -chart.Weights.EF(opt_meanvar, match.col="var") +chart.Weights.EF(opt_meanvar, match.col="StdDev") # or we can extract the efficient frontier and then plot it -ef <- extractEfficientFrontier(object=opt_meanvar, match.col="var", n.portfolios=15) +ef <- extractEfficientFrontier(object=opt_meanvar, match.col="StdDev", n.portfolios=15) chart.Weights.EF(ef, match.col="var", colorset=bluemono) # mean-etl efficient frontier -meanetl.ef <- create.EfficientFrontier(R=R, portfolio=meanetl.portf, type="mean-etl") +meanetl.ef <- create.EfficientFrontier(R=R, portfolio=meanetl.portf, type="mean-ES") chart.EfficientFrontier(meanetl.ef, match.col="ES", main="mean-ETL Efficient Frontier", type="l", col="blue") chart.Weights.EF(meanetl.ef, colorset=bluemono, match.col="ES") @@ -83,62 +83,10 @@ group.portf <- add.constraint(portfolio=group.portf, type="long_only") # optimize.portfolio(R=R, portfolio=group.portf, optimize_method="ROI") -foo <- function(R, portfolio_list, type, match.col="ES", main="Efficient Frontiers", cex.axis=0.8, element.color="darkgray", legend.loc=NULL, legend.labels=NULL, cex.legend=0.8, xlim=NULL, ylim=NULL, ...){ - - # create multiple efficient frontier objects (one per portfolio in portfolio_list) - # store in out - out <- list() - for(i in 1:length(portfolio_list)){ - if(!is.portfolio(portfolio_list[[i]])) stop("portfolio in portfolio_list must be of class 'portfolio'") - 
out[[i]] <- create.EfficientFrontier(R=R, portfolio=portfolio_list[[i]], type=type) - } - # get the data to plot scatter of asset returns - asset_ret <- scatterFUN(R=R, FUN="mean") - asset_risk <- scatterFUN(R=R, FUN=match.col) - rnames <- colnames(R) - # plot the assets - plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) - axis(1, cex.axis = cex.axis, col = element.color) - axis(2, cex.axis = cex.axis, col = element.color) - box(col = element.color) - # risk-return scatter of the assets - points(x=asset_risk, y=asset_ret) - text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) - - for(i in 1:length(out)){ - tmp <- out[[i]] - tmpfrontier <- tmp$frontier - cnames <- colnames(tmpfrontier) - - # get the "mean" column - mean.mtc <- pmatch("mean", cnames) - if(is.na(mean.mtc)) { - mean.mtc <- pmatch("mean.mean", cnames) - } - if(is.na(mean.mtc)) stop("could not match 'mean' with column name of extractStats output") - - # get the match.col column - mtc <- pmatch(match.col, cnames) - if(is.na(mtc)) { - mtc <- pmatch(paste(match.col, match.col, sep='.'),cnames) - } - if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") - lines(x=tmpfrontier[, mtc], y=tmpfrontier[, mean.mtc], col=i, lty=i, lwd=2) - } - if(!is.null(legend.loc)){ - if(is.null(legend.labels)){ - legend.labels <- paste("Portfolio", 1:length(out), sep=".") - } - legend(legend.loc, legend=legend.labels, col=1:length(out), lty=1:length(out), lwd=2, cex=cex.legend, bty="n") - } - return(invisible(out)) -} - portf.list <- list(lo.portf, box.portf, group.portf) legend.labels <- c("Long Only", "Box", "Group + Long Only") -foo(R=R, portfolio_list=portf.list, type="mean-var", match.col="var", - ylim=c(0.0055, 0.0085), xlim=c(0, 0.0025), legend.loc="bottomright", - legend.labels=legend.labels) +chart.EfficientFrontierOverlay(R=R, portfolio_list=portf.list, type="mean-StdDev", match.col="StdDev", + legend.loc="right", legend.labels=legend.labels) # TODO add the following methods for objects of class efficient.frontier # - print From noreply at r-forge.r-project.org Mon Aug 26 18:59:29 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 26 Aug 2013 18:59:29 +0200 (CEST) Subject: [Returnanalytics-commits] r2891 - in pkg/PortfolioAnalytics: R man Message-ID: <20130826165929.B28C918524B@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-26 18:59:29 +0200 (Mon, 26 Aug 2013) New Revision: 2891 Modified: pkg/PortfolioAnalytics/R/constraints.R pkg/PortfolioAnalytics/man/group_constraint.Rd Log: Modifying group_constraint constructor to accept a list of vectors for multiple group levels instead of a vector for conitigous groups. Modified: pkg/PortfolioAnalytics/R/constraints.R =================================================================== --- pkg/PortfolioAnalytics/R/constraints.R 2013-08-26 04:54:04 UTC (rev 2890) +++ pkg/PortfolioAnalytics/R/constraints.R 2013-08-26 16:59:29 UTC (rev 2891) @@ -502,20 +502,45 @@ #' #' pspec <- portfolio.spec(assets=colnames(ret)) #' +#' # Assets 1 and 3 are groupA +#' # Assets 2 and 4 are groupB #' pspec <- add.constraint(portfolio=pspec, #' type="group", -#' groups=c(2, 2), -#' group_labels=c("Style A", "Style B"), +#' groups=list(groupA=c(1, 3), +#' groupB=c(2, 4)), #' group_min=c(0.15, 0.25), #' group_max=c(0.65, 0.55)) +#' +#' # 2 levels of grouping (e.g. 
by sector and geography) +#' pspec <- portfolio.spec(assets=5) +#' # Assets 1, 3, and 5 are Tech +#' # Assets 2 and 4 are Oil +#' # Assets 2, 4, and 5 are UK +#' # Assets 1 and are are US +#' group_list <- list(group1=c(1, 3, 5), +#' group2=c(2, 4), +#' groupA=c(2, 4, 5), +#' groupB=c(1, 3)) +#' +#' pspec <- add.constraint(portfolio=pspec, +#' type="group", +#' groups=group_list, +#' group_min=c(0.15, 0.25, 0.2, 0.1), +#' group_max=c(0.65, 0.55, 0.5, 0.4)) +#' #' @export group_constraint <- function(type="group", assets, groups, group_labels=NULL, group_min, group_max, group_pos=NULL, enabled=TRUE, message=FALSE, ...) { + if(!is.list(groups)) stop("groups must be passed in as a list") nassets <- length(assets) ngroups <- length(groups) + groupnames <- names(groups) - if(sum(groups) != nassets) { - stop("sum of groups must be equal to the number of assets") - } + # comment out so the user can pass in multiple levels of groups + # may want a warning message + # count <- sum(sapply(groups, length)) + # if(count != nassets) { + # message("count of assets in groups must be equal to the number of assets") + # } # Checks for group_min if (length(group_min) == 1) { @@ -531,8 +556,13 @@ } if (length(group_max) != ngroups) stop(paste("length of group_max must be equal to 1 or the length of groups:", ngroups)) + # construct the group_label vector if groups is a named list + if(!is.null(groupnames)){ + group_labels <- groupnames + } + # Construct the group_label vector if it is not passed in - if(is.null(group_labels)){ + if(is.null(group_labels) & is.null(groupnames)){ group_labels <- paste(rep("group", ngroups), 1:ngroups, sep="") } @@ -544,9 +574,9 @@ if(length(group_pos) != length(groups)) stop("length of group_pos must be equal to the length of groups") # Check for negative values in group_pos if(any(group_pos < 0)) stop("all elements of group_pos must be positive") - # Elements of group_pos cannot be greater than groups - if(any(group_pos > groups)){ - group_pos <- pmin(group_pos, groups) + # Elements of group_pos cannot be greater than count of assets in groups + if(any(group_pos > sapply(groups, length))){ + group_pos <- pmin(group_pos, sapply(groups, length)) } } Modified: pkg/PortfolioAnalytics/man/group_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/group_constraint.Rd 2013-08-26 04:54:04 UTC (rev 2890) +++ pkg/PortfolioAnalytics/man/group_constraint.Rd 2013-08-26 16:59:29 UTC (rev 2891) @@ -47,12 +47,31 @@ pspec <- portfolio.spec(assets=colnames(ret)) +# Assets 1 and 3 are groupA +# Assets 2 and 4 are groupB pspec <- add.constraint(portfolio=pspec, type="group", - groups=c(2, 2), - group_labels=c("Style A", "Style B"), + groups=list(groupA=c(1, 3), + groupB=c(2, 4)), group_min=c(0.15, 0.25), group_max=c(0.65, 0.55)) + +# 2 levels of grouping (e.g. 
by sector and geography) +pspec <- portfolio.spec(assets=5) +# Assets 1, 3, and 5 are Tech +# Assets 2 and 4 are Oil +# Assets 2, 4, and 5 are UK +# Assets 1 and are are US +group_list <- list(group1=c(1, 3, 5), + group2=c(2, 4), + groupA=c(2, 4, 5), + groupB=c(1, 3)) + +pspec <- add.constraint(portfolio=pspec, + type="group", + groups=group_list, + group_min=c(0.15, 0.25, 0.2, 0.1), + group_max=c(0.65, 0.55, 0.5, 0.4)) } \author{ Ross Bennett From noreply at r-forge.r-project.org Mon Aug 26 19:50:35 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 26 Aug 2013 19:50:35 +0200 (CEST) Subject: [Returnanalytics-commits] r2892 - pkg/PortfolioAnalytics/R Message-ID: <20130826175036.0A48D184289@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-26 19:50:35 +0200 (Mon, 26 Aug 2013) New Revision: 2892 Modified: pkg/PortfolioAnalytics/R/constraint_fn_map.R Log: modifying group_fail function to work with list of vectors to specify group constraints Modified: pkg/PortfolioAnalytics/R/constraint_fn_map.R =================================================================== --- pkg/PortfolioAnalytics/R/constraint_fn_map.R 2013-08-26 16:59:29 UTC (rev 2891) +++ pkg/PortfolioAnalytics/R/constraint_fn_map.R 2013-08-26 17:50:35 UTC (rev 2892) @@ -582,26 +582,19 @@ group_fail <- function(weights, groups, cLO, cUP, group_pos=NULL){ # return FALSE if groups, cLO, or cUP is NULL if(is.null(groups) | is.null(cLO) | is.null(cUP)) return(FALSE) - + group_count <- sapply(groups, length) # group_pos sets a limit on the number of non-zero weights by group # Set equal to groups if NULL - if(is.null(group_pos)) group_pos <- groups + if(is.null(group_pos)) group_pos <- group_count tolerance <- .Machine$double.eps^0.5 n.groups <- length(groups) group_fail <- vector(mode="logical", length=n.groups) - k <- 1 - l <- 0 + for(i in 1:n.groups){ - j <- groups[i] - tmp.w <- weights[k:(l+j)] - grp.min <- cLO[i] - grp.max <- cUP[i] - grp.pos <- group_pos[i] - # return TRUE if grp.min or grp.max is violated - group_fail[i] <- ( sum(tmp.w) < grp.min | sum(tmp.w) > grp.max | (sum(abs(tmp.w) > tolerance) > grp.pos)) - k <- k + j - l <- k - 1 + # sum of the weights for a given group + tmp.w <- weights[groups[[i]]] + group_fail[i] <- ( (sum(tmp.w) < cLO[i]) | (sum(tmp.w) > cUP[i]) | (sum(abs(tmp.w) > tolerance) > group_pos[i]) ) } # returns logical vector of groups. TRUE if either cLO or cUP is violated return(group_fail) From noreply at r-forge.r-project.org Mon Aug 26 20:12:59 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 26 Aug 2013 20:12:59 +0200 (CEST) Subject: [Returnanalytics-commits] r2893 - pkg/PortfolioAnalytics/R Message-ID: <20130826181259.DE15C1849D8@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-26 20:12:59 +0200 (Mon, 26 Aug 2013) New Revision: 2893 Modified: pkg/PortfolioAnalytics/R/constrained_objective.R Log: modifying group constraint penalty block in constrained_objective to work with groups being specified as a list. 
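[Editorial sketch, ahead of the diff below: the list-based group handling that
r2891 through r2895 converge on. Each group is a vector of asset indices, so
group sums no longer depend on contiguous asset ordering; the weights and
bounds here are invented for illustration.]

groups <- list(groupA=c(1, 3), groupB=c(2, 4))     # asset index vectors
weights <- c(0.3, 0.2, 0.4, 0.1)
cLO <- c(0.15, 0.25); cUP <- c(0.65, 0.55)
group_sums <- sapply(groups, function(idx) sum(weights[idx]))
(group_sums < cLO) | (group_sums > cUP)            # TRUE where a group bound is violated
# the equivalent constraint-matrix form, one row per group as built in optFUN.R:
Amat.group <- matrix(0, nrow=length(groups), ncol=length(weights))
for(i in seq_along(groups)) Amat.group[i, groups[[i]]] <- 1
Amat.group %*% weights                             # the same group sums, in matrix form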
Modified: pkg/PortfolioAnalytics/R/constrained_objective.R =================================================================== --- pkg/PortfolioAnalytics/R/constrained_objective.R 2013-08-26 17:50:35 UTC (rev 2892) +++ pkg/PortfolioAnalytics/R/constrained_objective.R 2013-08-26 18:12:59 UTC (rev 2893) @@ -420,23 +420,15 @@ # Only go to penalty term if group constraint is violated if(any(group_fail(w, groups, cLO, cUP))){ ngroups <- length(groups) - k <- 1 - l <- 0 for(i in 1:ngroups){ - j <- groups[i] - tmp_w <- w[k:(l+j)] - # penalize weights for a given group that sum to less than specified group min - grp_min <- cLO[i] - if(sum(tmp_w) < grp_min) { - out <- out + penalty * (grp_min - sum(tmp_w)) + tmp_w <- w[groups[[i]]] + # penalize for weights that are below cLO + if(sum(tmp_w) < cLO[i]){ + out <- out + penalty * (cLO[i] - sum(tmp_w)) } - # penalize weights for a given group that sum to greater than specified group max - grp_max <- cUP[i] - if(sum(tmp_w) > grp_max) { - out <- out + penalty * (sum(tmp_w) - grp_max) + if(sum(tmp_w) > cUP[i]){ + out <- out + penalty * (sum(tmp_w) - cUP[i]) } - k <- k + j - l <- k - 1 } } } # End group constraint penalty From noreply at r-forge.r-project.org Mon Aug 26 20:23:58 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 26 Aug 2013 20:23:58 +0200 (CEST) Subject: [Returnanalytics-commits] r2894 - pkg/PortfolioAnalytics/R Message-ID: <20130826182358.180681848DC@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-26 20:23:57 +0200 (Mon, 26 Aug 2013) New Revision: 2894 Modified: pkg/PortfolioAnalytics/R/optFUN.R Log: modifying functions in optFUN to construct the group matrix for Amat constraints matrix Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-26 18:12:59 UTC (rev 2893) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-26 18:23:57 UTC (rev 2894) @@ -29,13 +29,8 @@ if(try(!is.null(constraints$groups), silent=TRUE)){ n.groups <- length(constraints$groups) Amat.group <- matrix(0, nrow=n.groups, ncol=N) - k <- 1 - l <- 0 for(i in 1:n.groups){ - j <- constraints$groups[i] - Amat.group[i, k:(l+j)] <- 1 - k <- l + j + 1 - l <- k - 1 + Amat.group[i, groups[[i]]] <- 1 } if(is.null(constraints$cLO)) cLO <- rep(-Inf, n.groups) if(is.null(constraints$cUP)) cUP <- rep(Inf, n.groups) @@ -94,13 +89,8 @@ if(try(!is.null(constraints$groups), silent=TRUE)){ n.groups <- length(constraints$groups) Amat.group <- matrix(0, nrow=n.groups, ncol=N) - k <- 1 - l <- 0 for(i in 1:n.groups){ - j <- constraints$groups[i] - Amat.group[i, k:(l+j)] <- 1 - k <- l + j + 1 - l <- k - 1 + Amat.group[i, groups[[i]]] <- 1 } if(is.null(constraints$cLO)) cLO <- rep(-Inf, n.groups) if(is.null(constraints$cUP)) cUP <- rep(Inf, n.groups) @@ -184,13 +174,8 @@ if(try(!is.null(constraints$groups), silent=TRUE)){ n.groups <- length(constraints$groups) Amat.group <- matrix(0, nrow=n.groups, ncol=N) - k <- 1 - l <- 0 for(i in 1:n.groups){ - j <- constraints$groups[i] - Amat.group[i, k:(l+j)] <- 1 - k <- l + j + 1 - l <- k - 1 + Amat.group[i, groups[[i]]] <- 1 } if(is.null(constraints$cLO)) cLO <- rep(-Inf, n.groups) if(is.null(constraints$cUP)) cUP <- rep(Inf, n.groups) @@ -257,13 +242,8 @@ if(try(!is.null(constraints$groups), silent=TRUE)){ n.groups <- length(constraints$groups) Amat.group <- matrix(0, nrow=n.groups, ncol=N) - k <- 1 - l <- 0 for(i in 1:n.groups){ - j <- constraints$groups[i] - Amat.group[i, k:(l+j)] <- 1 - k <- l + j + 1 - l 
<- k - 1 + Amat.group[i, groups[[i]]] <- 1 } if(is.null(constraints$cLO)) cLO <- rep(-Inf, n.groups) if(is.null(constraints$cUP)) cUP <- rep(Inf, n.groups) @@ -362,13 +342,8 @@ if(try(!is.null(constraints$groups), silent=TRUE)){ n.groups <- length(constraints$groups) Amat.group <- matrix(0, nrow=n.groups, ncol=m) - k <- 1 - l <- 0 for(i in 1:n.groups){ - j <- constraints$groups[i] - Amat.group[i, k:(l+j)] <- 1 - k <- l + j + 1 - l <- k - 1 + Amat.group[i, groups[[i]]] <- 1 } if(is.null(constraints$cLO)) cLO <- rep(-Inf, n.groups) if(is.null(constraints$cUP)) cUP <- rep(Inf, n.groups) From noreply at r-forge.r-project.org Mon Aug 26 20:32:27 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 26 Aug 2013 20:32:27 +0200 (CEST) Subject: [Returnanalytics-commits] r2895 - pkg/PortfolioAnalytics/R Message-ID: <20130826183227.F0F8C1849D8@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-26 20:32:27 +0200 (Mon, 26 Aug 2013) New Revision: 2895 Modified: pkg/PortfolioAnalytics/R/generics.R Log: modifying groups section in summary method for optimize.portfolio objects Modified: pkg/PortfolioAnalytics/R/generics.R =================================================================== --- pkg/PortfolioAnalytics/R/generics.R 2013-08-26 18:23:57 UTC (rev 2894) +++ pkg/PortfolioAnalytics/R/generics.R 2013-08-26 18:32:27 UTC (rev 2895) @@ -553,13 +553,8 @@ cat("Group Weights:\n") n.groups <- length(groups) group_weights <- rep(0, n.groups) - k <- 1 - l <- 0 for(i in 1:n.groups){ - j <- groups[i] - group_weights[i] <- sum(object$weights[k:(l+j)]) - k <- k + j - l <- k - 1 + group_weights[i] <- sum(weights[groups[[i]]]) } names(group_weights) <- group_labels print(group_weights) From noreply at r-forge.r-project.org Mon Aug 26 20:48:50 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 26 Aug 2013 20:48:50 +0200 (CEST) Subject: [Returnanalytics-commits] r2896 - in pkg/PerformanceAnalytics/sandbox/pulkit: R man Message-ID: <20130826184850.876AE1853D9@r-forge.r-project.org> Author: pulkit Date: 2013-08-26 20:48:50 +0200 (Mon, 26 Aug 2013) New Revision: 2896 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R pkg/PerformanceAnalytics/sandbox/pulkit/R/MonteSimulTriplePenance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/CdarMultiPath.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd 
pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MonteSimulTriplePenance.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.SRIndifference.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/golden_section.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/table.PSR.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/table.Penance.Rd Log: some changes in documentation and examples Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -25,11 +25,12 @@ #'@param cex set the cex value, as in \code{\link{plot}} #'@param cex.axis set the cex.axis value, as in \code{\link{plot}} #'@param cex.main set the cex.main value, as in \code{\link{plot}} +#'@param cex.lab set the cex.lab value, as in \code{\link{plot}} #'@param vs The values against which benchmark SR has to be plotted. can be #'"sharpe","correlation" or "strategies" #'@param ylim set the ylim value, as in \code{\link{plot}} #'@param xlim set the xlim value, as in \code{\link{plot}} -#' +#'@param \dots any other passthru variable #'@author Pulkit Mehrotra #'@seealso \code{\link{BenchmarkSR}} \code{\link{chart.SRIndifference}} \code{\link{plot}} #'@references Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/CDaRMultipath.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -22,8 +22,7 @@ #' #'@param R an xts, vector, matrix,data frame, timeSeries or zoo object of multiple sample path returns #'@param ps the probability for each sample path -#'@param scen the number of scenarios in the Return series -#'@param instr the number of instruments in the Return series +#'@param sample the number of samples in the Return series #'@param geometric utilize geometric chaining (TRUE) or simple/arithmetic #'chaining (FALSE) to aggregate returns, default TRUE #'@param p confidence level for calculation ,default(p=0.95) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -55,7 +55,7 @@ #'BetaDrawdown(edhec[,1],edhec[,2]) #' #'@export -BetaDrawdown<-function(R,Rm,h=0,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ +BetaDrawdown<-function(R,Rm,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ # DESCRIPTION: # Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R =================================================================== --- 
pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -22,7 +22,9 @@ #' #'@param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns #'@param Rm Return series of the optimal portfolio an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#'@param sample The number of sample paths in the return series #'@param p confidence level for calculation ,default(p=0.95) +#'@param ps The probability for each sample path #'@param weights portfolio weighting vector, default NULL, see Details #' @param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default TRUE #' @param type The type of BetaDrawdown if specified alpha then the alpha value given is taken (default 0.95). If "average" then @@ -44,7 +46,7 @@ #'BetaDrawdown(edhec[,1],edhec[,2]) #expected value 0.5390431 #'@export -MultiBetaDrawdown<-function(R,Rm,sample,ps,h=0,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ +MultiBetaDrawdown<-function(R,Rm,sample,ps,p=0.95,weights=NULL,geometric=TRUE,type=c("alpha","average","max"),...){ # DESCRIPTION: # Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -16,8 +16,7 @@ #'@param gamma (1-gamma) is the investor risk aversion #'else the return series will be used #'@param Rf risk free rate can be vector such as government security rate of return. -#'@param h Look back period -#'@param geomtric geometric utilize geometric chaining (TRUE) or simple/arithmetic #'chaining(FALSE) to aggregate returns, default is TRUE. +#'@param geometric geometric utilize geometric chaining (TRUE) or simple/arithmetic #'chaining(FALSE) to aggregate returns, default is TRUE. #'@param ... any other variable #'@author Pulkit Mehrotra #'@seealso \code{\link{chart.REDD}} \code{\link{EconomicDrawdown}} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/Edd.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -10,7 +10,7 @@ #' #'\deqn{EDD(t)=1-\frac{W_t}/{EM(t)}} #' -#'Here EM stands for Economic Max and is the code \code{\link{EconomicMax}} +#'Here EM stands for Economic Max. #' #' #'@param R an xts, vector, matrix, data frame, timeseries, or zoo object of asset return. Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/GoldenSection.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -18,8 +18,9 @@ #'@param b final point #'@param minimum TRUE to calculate the minimum and FALSE to calculate the Maximum #'@param function_name The name of the function +#'@param \dots any other passthru variable #'@author Pulkit Mehrotra -#' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). +#' @references Bailey, David H. 
and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the "Triple Penance" Rule(January 1, 2013). #' #'@export golden_section<-function(a,b,minimum = TRUE,function_name,...){ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/MaxDD.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -39,9 +39,10 @@ #' @param R Returns #' @param confidence the confidence interval #' @param type The type of distribution "normal" or "ar"."ar" stands for Autoregressive. +#' @param \dots any other passthru variable #' @author Pulkit Mehrotra #' @seealso \code{\link{chart.Penance}} \code{\link{table.Penance}} \code{\link{TuW}} -#' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). +#' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the "Triple Penance" Rule(January 1, 2013). #' #' @examples #' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/MinTRL.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -1,7 +1,7 @@ #'@title Minimum Track Record Length #' #'@description -#'Minimum Track Record Length tells us ?How long should a track record be in +#'Minimum Track Record Length tells us "How long should a track record be in #'order to have statistical confidence that its Sharpe ratio is above a given #'threshold? ". If a track record is shorter than MinTRL, we do not have enough #'confidence that the observed Sharpe Ratio is above the designated threshold. @@ -37,6 +37,7 @@ #'To be given in case the return series is not given. #'@param kr Kurtosis, in the same periodicity as the returns(non-annualized). #'To be given in case the return series is not given. +#'@param \dots any other passthru variable #' #'@author Pulkit Mehrotra #'@seealso \code{\link{ProbSharpeRatio}} \code{\link{PsrPortfolio}} \code{\link{table.PSR}} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/MonteSimulTriplePenance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/MonteSimulTriplePenance.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/MonteSimulTriplePenance.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -21,7 +21,7 @@ #' @param confidence Confidence level for quantile #' @author Pulkit Mehrotra #' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs -#' and the ?Triple Penance? Rule(January 1, 2013). +#' and the "Triple Penance" Rule(January 1, 2013). #' #' @examples #' MonteSimulTriplePenance(10^6,0.5,1,2,1,25,0.95) # Expected Value Quantile (Exact) = 6.781592 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -30,6 +30,7 @@ #' @param kr Kurtosis, in the same periodicity as the returns(non-annualized). #' To be given in case the return series is not given. #' @param n track record length. 
To be given in case the return series is not given. +#' @param \dots any other passthru variable #'@author Pulkit Mehrotra #'@seealso \code{\link{PsrPortfolio}} \code{\link{table.PSR}} \code{\link{MinTrackRecord}} #' @references Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -27,8 +27,11 @@ #'else the return series will be used #'@param Rf risk free rate can be vector such as government security rate of return. #'@param h Look back period -#'@param geomtric geometric utilize geometric chaining (TRUE) or simple/arithmetic #'chaining(FALSE) to aggregate returns, default is TRUE. -#'@param ... any other variable +#'@param geometric geometric utilize geometric chaining (TRUE) or simple/arithmetic +#'chaining(FALSE) to aggregate returns, default is TRUE. +#'@param \dots any other variable +#'@param asset The number of risky assets in the portfolio +#'@param type The type of portfolio optimization #' #'@author Pulkit Mehrotra #'@seealso \code{\link{chart.REDD}} \code{\link{EconomicDrawdown}} @@ -57,7 +60,7 @@ #'@export #' -REDDCOPS<-function(R ,delta,Rf,h,geometric = TRUE,asset = c("one","two","three"),type=c("calibrated","risk-based"),...){ +REDDCOPS<-function(R ,delta,Rf,h,geometric = TRUE,asset = c("one","two","three"),type=c("calibrated","risk-based"),sharpe = 0,...){ # DESCRIPTION # Calculates the dynamic weights for single and double risky asset portfolios # using Rolling Economic Drawdown Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -18,7 +18,7 @@ #'@param R R an xts, vector, matrix, data frame, timeseries, or zoo object of asset return. #'@param Rf risk free rate can be vector such as government security rate of return. #'@param h Look back period -#'@param geomtric geometric utilize geometric chaining (TRUE) or simple/arithmetic #'chaining(FALSE) to aggregate returns, default is TRUE. +#'@param geometric geometric utilize geometric chaining (TRUE) or simple/arithmetic #'chaining(FALSE) to aggregate returns, default is TRUE. #'@param ... any other variable #'@author Pulkit Mehrotra #'@seealso \code{\link{chart.REDD}} \code{\link{EconomicDrawdown}} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/SRIndifferenceCurve.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -2,7 +2,7 @@ #'Sharpe Ratio Indifference Curve #' #'@description -#'The trade-off between a candidate?s SR and its correlation +#'The trade-off between a candidate's SR and its correlation #'to the existing set of strategies, is given by the Sharpe #'ratio indifference curve. 
It is a plot between the candidate's #'Sharpe Ratio and candidate's average correlation for a given @@ -28,11 +28,12 @@ #'@param lwd set the width of the line, as in \code{\link{plot}} #'@param pch set the pch value, as in \code{\link{plot}} #'@param cex set the cex value, as in \code{\link{plot}} +#'@param cex.lab set the cex.lab value, as in \code{\link{plot}} #'@param cex.axis set the cex.axis value, as in \code{\link{plot}} #'@param cex.main set the cex.main value, as in \code{\link{plot}} #'@param ylim set the ylim value, as in \code{\link{plot}} #'@param xlim set the xlim value, as in \code{\link{plot}} -#' +#'@param \dots Any other passthru variable #'@author Pulkit Mehrotra #'@references #'Bailey, David H. and Lopez de Prado, Marcos, The Strategy Approval Decision: Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/TuW.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -26,9 +26,10 @@ #' @param R return series #' @param confidence the confidence interval #' @param type The type of distribution "normal" or "ar"."ar" stands for Autoregressive. +#' @param \dots Any other passthru variable #' @author Pulkit Mehrotra #' @seealso \code{\link{chart.Penance}} \code{\link{MaxDD}} \code{\link{table.Penance}} -#' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). +#' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the "Triple Penance" Rule(January 1, 2013). #' #' @examples #' data(edhec) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.Penance.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -28,9 +28,10 @@ #'@param cex set the cex value, as in \code{\link{plot}} #'@param cex.axis set the cex.axis value, as in \code{\link{plot}} #'@param cex.main set the cex.main value, as in \code{\link{plot}} +#'@param cex.lab set the cex.lab value, as in \code{\link{plot}} #'@param ylim set the ylim value, as in \code{\link{plot}} #'@param xlim set the xlim value, as in \code{\link{plot}} -#' +#'@param \dots Any other pass thru variable #'@author Pulkit Mehrotra #'@seealso \code{\link{plot}} \code{\link{table.Penance}} \code{\link{MaxDD}} \code{\link{TuW}} #'@keywords ts multivariate distribution models hplot @@ -38,7 +39,7 @@ #'data(edhec) #'chart.Penance(edhec,0.95) #' -#'@references Bailey, David H. and Lopez de Prado, Marcos,Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). +#'@references Bailey, David H. and Lopez de Prado, Marcos,Drawdown-Based Stop-Outs and the "Triple Penance" Rule(January 1, 2013). #' #'@export Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/chart.REDD.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -6,7 +6,7 @@ #' For more details on rolling economic drawdown see \code{rollDrawdown}. #' #'@param R an xts, vector, matrix, data frame, timeseries, or zoo object of asset return. 
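# [Editorial aside] A simplified sketch of the economic drawdown quantity that
# these functions document; the returns are invented, and the risk-free
# adjustment and lookback window h are omitted for brevity:
r <- c(0.02, -0.01, 0.03, -0.04, 0.01)   # hypothetical period returns
W <- cumprod(1 + r)                      # geometric wealth path
EM <- cummax(W)                          # economic max to date
1 - W / EM                               # EDD(t) = 1 - W_t / EM(t)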
-#'@param Rf risk free rate can be vector such as government security rate of return +#'@param rf risk free rate can be vector such as government security rate of return #'@param h lookback period #'@param geometric utilize geometric chaining (TRUE) or simple/arithmetic chaining(FALSE) to aggregate returns, default is TRUE. #'@param legend.loc set the legend.loc, as in \code{\link{plot}} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/table.PSR.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -10,9 +10,9 @@ #'@param R the return series #'@param Rf the risk free rate of return #'@param refSR the reference Sharpe Ratio -#'@param the confidence level +#'@param p the confidence level #'@param weights the weights for the portfolio -#' +#'@param \dots Any other passthru variable #'@author Pulkit Mehrotra #'@seealso \code{\link{ProbSharpeRatio}} \code{\link{PsrPortfolio}} \code{\link{MinTrackRecord}} #'@references Bailey, David H. and Lopez de Prado, Marcos, \emph{The Sharpe Ratio Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/table.Penance.R 2013-08-26 18:48:50 UTC (rev 2896) @@ -11,7 +11,7 @@ #' #' @author Pulkit Mehrotra #' @seealso \code{\link{chart.Penance}} \code{\link{MaxDD}} \code{\link{TuW}} -#' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the ?Triple Penance? Rule(January 1, 2013). +#' @references Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the "Triple Penance" Rule(January 1, 2013). #' @export table.Penance<-function(R,confidence){ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -2,7 +2,7 @@ \alias{BetaDrawdown} \title{Drawdown Beta for single path} \usage{ - BetaDrawdown(R, Rm, h = 0, p = 0.95, weights = NULL, + BetaDrawdown(R, Rm, p = 0.95, weights = NULL, geometric = TRUE, type = c("alpha", "average", "max"), ...) 
} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/CdarMultiPath.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/CdarMultiPath.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/CdarMultiPath.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -11,11 +11,8 @@ \item{ps}{the probability for each sample path} - \item{scen}{the number of scenarios in the Return series} + \item{sample}{the number of samples in the Return series} - \item{instr}{the number of instruments in the Return - series} - \item{geometric}{utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default TRUE} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -16,9 +16,7 @@ \item{Rf}{risk free rate can be vector such as government security rate of return.} - \item{h}{Look back period} - - \item{geomtric}{geometric utilize geometric chaining + \item{geometric}{utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default is TRUE.} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/EconomicDrawdown.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -28,8 +28,7 @@ \deqn{EDD(t)=1-\frac{W_t}{EM(t)}} - Here EM stands for Economic Max and is the code - \code{\link{EconomicMax}} + Here EM stands for Economic Max. } \examples{ data(edhec) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/MaxDD.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -11,6 +11,8 @@ \item{type}{The type of distribution "normal" or "ar"."ar" stands for Autoregressive.} + + \item{\dots}{any other passthru variable} } \description{ \code{MaxDD} calculates the Maximum drawdown for a @@ -70,7 +72,7 @@ } \references{ Bailey, David H. and Lopez de Prado, Marcos, - Drawdown-Based Stop-Outs and the ?Triple Penance? + Drawdown-Based Stop-Outs and the "Triple Penance" Rule(January 1, 2013). } \seealso{ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/MinTrackRecord.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -31,9 +31,11 @@ \item{kr}{Kurtosis, in the same periodicity as the returns (non-annualized). To be given in case the return series is not given.} + + \item{\dots}{any other passthru variable} } \description{ - Minimum Track Record Length tells us ?How long should a + Minimum Track Record Length tells us "How long should a track record be in order to have statistical confidence that its Sharpe ratio is above a given threshold?".
If a track record is shorter than MinTRL, we do not have Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/MonteSimulTriplePenance.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/MonteSimulTriplePenance.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/MonteSimulTriplePenance.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -43,7 +43,7 @@ } \references{ Bailey, David H. and Lopez de Prado, Marcos, - Drawdown-Based Stop-Outs and the ?Triple Penance? + Drawdown-Based Stop-Outs and the "Triple Penance" Rule(January 1, 2013). } Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -2,7 +2,7 @@ \alias{MultiBetaDrawdown} \title{Drawdown Beta for Multiple path} \usage{ - MultiBetaDrawdown(R, Rm, sample, ps, h = 0, p = 0.95, + MultiBetaDrawdown(R, Rm, sample, ps, p = 0.95, weights = NULL, geometric = TRUE, type = c("alpha", "average", "max"), ...) } @@ -14,9 +14,14 @@ vector, matrix, data frame, timeSeries or zoo object of asset returns} + \item{sample}{The number of sample paths in the return + series} + \item{p}{confidence level for calculation ,default(p=0.95)} + \item{ps}{The probability for each sample path} + \item{weights}{portfolio weighting vector, default NULL, see Details} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -35,6 +35,8 @@ \item{n}{track record length. To be given in case the return series is not given.} + + \item{\dots}{any other passthru variable} } \description{ Given a predefined benchmark Sharpe ratio ,the observed Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -4,7 +4,7 @@ \usage{ REDDCOPS(R, delta, Rf, h, geometric = TRUE, asset = c("one", "two", "three"), - type = c("calibrated", "risk-based"), ...) + type = c("calibrated", "risk-based"), sharpe = 0, ...) 
} \arguments{ \item{R}{an xts, vector, matrix, data frame, timeSeries @@ -20,11 +20,15 @@ \item{h}{Look back period} - \item{geomtric}{geometric utilize geometric chaining - (TRUE) or simple/arithmetic #'chaining(FALSE) to - aggregate returns, default is TRUE.} + \item{geometric}{utilize geometric chaining + (TRUE) or simple/arithmetic chaining (FALSE) to aggregate + returns, default is TRUE.} - \item{...}{any other variable} + \item{\dots}{any other variable} + + \item{asset}{The number of risky assets in the portfolio} + + \item{type}{The type of portfolio optimization} } \description{ The Rolling Economic Drawdown Controlled Optimal Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/TuW.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -11,6 +11,8 @@ \item{type}{The type of distribution "normal" or "ar"."ar" stands for Autoregressive.} + + \item{\dots}{Any other passthru variable} } \description{ \code{TriplePenance} calculates the maximum Maximum Time @@ -53,7 +55,7 @@ } \references{ Bailey, David H. and Lopez de Prado, Marcos, - Drawdown-Based Stop-Outs and the ?Triple Penance? + Drawdown-Based Stop-Outs and the "Triple Penance" Rule(January 1, 2013). } \seealso{ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -38,12 +38,17 @@ \item{cex.main}{set the cex.main value, as in \code{\link{plot}}} + \item{cex.lab}{set the cex.lab value, as in + \code{\link{plot}}} + \item{vs}{The values against which benchmark SR has to be plotted. can be "sharpe","correlation" or "strategies"} \item{ylim}{set the ylim value, as in \code{\link{plot}}} \item{xlim}{set the xlim value, as in \code{\link{plot}}} + + \item{\dots}{any other passthru variable} } \description{ Benchmark Sharpe Ratio Plots are used to give the Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.Penance.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -45,9 +45,14 @@ \item{cex.main}{set the cex.main value, as in \code{\link{plot}}} + \item{cex.lab}{set the cex.lab value, as in + \code{\link{plot}}} + \item{ylim}{set the ylim value, as in \code{\link{plot}}} \item{xlim}{set the xlim value, as in \code{\link{plot}}} + + \item{\dots}{Any other passthru variable} } \description{ A plot for Penance vs phi for the given portfolio The @@ -73,7 +78,7 @@ } \references{ Bailey, David H. and Lopez de Prado, - Marcos,Drawdown-Based Stop-Outs and the ?Triple Penance? + Marcos, Drawdown-Based Stop-Outs and the "Triple Penance" Rule(January 1, 2013).
} \seealso{ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.SRIndifference.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.SRIndifference.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.SRIndifference.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -35,6 +35,9 @@ \item{cex}{set the cex value, as in \code{\link{plot}}} + \item{cex.lab}{set the cex.lab value, as in + \code{\link{plot}}} + \item{cex.axis}{set the cex.axis value, as in \code{\link{plot}}} @@ -44,9 +47,11 @@ \item{ylim}{set the ylim value, as in \code{\link{plot}}} \item{xlim}{set the xlim value, as in \code{\link{plot}}} + + \item{\dots}{Any other passthru variable} } \description{ - The trade-off between a candidate?s SR and its + The trade-off between a candidate's SR and its correlation to the existing set of strategies, is given by the Sharpe ratio indifference curve. It is a plot between the candidate's Sharpe Ratio and candidate's Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/golden_section.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/golden_section.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/golden_section.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -13,6 +13,8 @@ calculate the Maximum} \item{function_name}{The name of the function} + + \item{\dots}{any other passthru variable} } \description{ The Golden Section Search method is used to find the @@ -42,7 +44,7 @@ } \references{ Bailey, David H. and Lopez de Prado, Marcos, - Drawdown-Based Stop-Outs and the ?Triple Penance? + Drawdown-Based Stop-Outs and the "Triple Penance" Rule(January 1, 2013). } Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -13,7 +13,7 @@ \item{h}{Look back period} - \item{geomtric}{geometric utilize geometric chaining + \item{geometric}{utilize geometric chaining (TRUE) or simple/arithmetic chaining (FALSE) to aggregate returns, default is TRUE.} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/table.PSR.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/table.PSR.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/table.PSR.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -12,9 +12,11 @@ \item{refSR}{the reference Sharpe Ratio} - \item{the}{confidence level} + \item{p}{the confidence level} \item{weights}{the weights for the portfolio} + + \item{\dots}{Any other passthru variable} } \description{ A table to display the Probabilistic Sharpe Ratio Along Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/table.Penance.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/table.Penance.Rd 2013-08-26 18:32:27 UTC (rev 2895) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/table.Penance.Rd 2013-08-26 18:48:50 UTC (rev 2896) @@ -21,7 +21,7 @@ } \references{ Bailey, David H. and Lopez de Prado, Marcos, - Drawdown-Based Stop-Outs and the ?Triple Penance? + Drawdown-Based Stop-Outs and the "Triple Penance" Rule(January 1, 2013).
} \seealso{ From noreply at r-forge.r-project.org Mon Aug 26 20:49:30 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 26 Aug 2013 20:49:30 +0200 (CEST) Subject: [Returnanalytics-commits] r2897 - pkg/PortfolioAnalytics/R Message-ID: <20130826184930.AE6F61853D9@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-26 20:49:30 +0200 (Mon, 26 Aug 2013) New Revision: 2897 Modified: pkg/PortfolioAnalytics/R/generics.R pkg/PortfolioAnalytics/R/optFUN.R Log: fixing error with group constraints in optFUN and summary method Modified: pkg/PortfolioAnalytics/R/generics.R =================================================================== --- pkg/PortfolioAnalytics/R/generics.R 2013-08-26 18:48:50 UTC (rev 2896) +++ pkg/PortfolioAnalytics/R/generics.R 2013-08-26 18:49:30 UTC (rev 2897) @@ -554,7 +554,7 @@ n.groups <- length(groups) group_weights <- rep(0, n.groups) for(i in 1:n.groups){ - group_weights[i] <- sum(weights[groups[[i]]]) + group_weights[i] <- sum(object$weights[groups[[i]]]) } names(group_weights) <- group_labels print(group_weights) Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-26 18:48:50 UTC (rev 2896) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-26 18:49:30 UTC (rev 2897) @@ -30,7 +30,7 @@ n.groups <- length(constraints$groups) Amat.group <- matrix(0, nrow=n.groups, ncol=N) for(i in 1:n.groups){ - Amat.group[i, groups[[i]]] <- 1 + Amat.group[i, constraints$groups[[i]]] <- 1 } if(is.null(constraints$cLO)) cLO <- rep(-Inf, n.groups) if(is.null(constraints$cUP)) cUP <- rep(Inf, n.groups) @@ -90,7 +90,7 @@ n.groups <- length(constraints$groups) Amat.group <- matrix(0, nrow=n.groups, ncol=N) for(i in 1:n.groups){ - Amat.group[i, groups[[i]]] <- 1 + Amat.group[i, constraints$groups[[i]]] <- 1 } if(is.null(constraints$cLO)) cLO <- rep(-Inf, n.groups) if(is.null(constraints$cUP)) cUP <- rep(Inf, n.groups) @@ -175,7 +175,7 @@ n.groups <- length(constraints$groups) Amat.group <- matrix(0, nrow=n.groups, ncol=N) for(i in 1:n.groups){ - Amat.group[i, groups[[i]]] <- 1 + Amat.group[i, constraints$groups[[i]]] <- 1 } if(is.null(constraints$cLO)) cLO <- rep(-Inf, n.groups) if(is.null(constraints$cUP)) cUP <- rep(Inf, n.groups) @@ -243,7 +243,7 @@ n.groups <- length(constraints$groups) Amat.group <- matrix(0, nrow=n.groups, ncol=N) for(i in 1:n.groups){ - Amat.group[i, groups[[i]]] <- 1 + Amat.group[i, constraints$groups[[i]]] <- 1 } if(is.null(constraints$cLO)) cLO <- rep(-Inf, n.groups) if(is.null(constraints$cUP)) cUP <- rep(Inf, n.groups) @@ -343,7 +343,7 @@ n.groups <- length(constraints$groups) Amat.group <- matrix(0, nrow=n.groups, ncol=m) for(i in 1:n.groups){ - Amat.group[i, groups[[i]]] <- 1 + Amat.group[i, constraints$groups[[i]]] <- 1 } if(is.null(constraints$cLO)) cLO <- rep(-Inf, n.groups) if(is.null(constraints$cUP)) cUP <- rep(Inf, n.groups) From noreply at r-forge.r-project.org Mon Aug 26 21:17:35 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Mon, 26 Aug 2013 21:17:35 +0200 (CEST) Subject: [Returnanalytics-commits] r2898 - in pkg/PortfolioAnalytics: R sandbox Message-ID: <20130826191735.AB395184E23@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-26 21:17:35 +0200 (Mon, 26 Aug 2013) New Revision: 2898 Added: pkg/PortfolioAnalytics/sandbox/testing_group_multlevels.R Modified: pkg/PortfolioAnalytics/R/optFUN.R Log: Modifying maxret_opt to add a stop() if no solution is found. 
Adding test script for group constraints with multiple levels. Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-26 18:49:30 UTC (rev 2897) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-26 19:17:35 UTC (rev 2898) @@ -123,6 +123,7 @@ #non-zero value otherwise. if(roi.result$status$code != 0) { message(roi.result$status$msg$message) + stop("No solution") return(NULL) } Added: pkg/PortfolioAnalytics/sandbox/testing_group_multlevels.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_group_multlevels.R (rev 0) +++ pkg/PortfolioAnalytics/sandbox/testing_group_multlevels.R 2013-08-26 19:17:35 UTC (rev 2898) @@ -0,0 +1,41 @@ + +library(PortfolioAnalytics) +library(ROI) +library(ROI.plugin.quadprog) +library(ROI.plugin.glpk) + + +data(edhec) +R <- edhec[, 1:4] +colnames(R) <- c("CA", "CTAG", "DS", "EM") +funds <- colnames(R) + +# set up portfolio with objectives and constraints +pspec <- portfolio.spec(assets=funds) +pspec <- add.constraint(portfolio=pspec, type="full_investment") +pspec <- add.constraint(portfolio=pspec, type="long_only") +pspec <- add.constraint(portfolio=pspec, type="group", + groups=list(groupA=c(1, 3), + groupB=c(2, 4), + geoA=c(1, 2, 4), + geoB=3), + group_min=c(0.15, 0.25, 0.15, 0.2), + group_max=c(0.4, 0.7, 0.8, 0.62)) +pspec + +maxret <- add.objective(portfolio=pspec, type="return", name="mean") +opt_maxret <- optimize.portfolio(R=R, portfolio=maxret, optimize_method="ROI") +summary(opt_maxret) + +minvar <- add.objective(portfolio=pspec, type="risk", name="var") +opt_minvar <- optimize.portfolio(R=R, portfolio=minvar, optimize_method="ROI") +summary(opt_minvar) + +minetl <- add.objective(portfolio=pspec, type="risk", name="ETL") +opt_minetl <- optimize.portfolio(R=R, portfolio=minetl, optimize_method="ROI") +summary(opt_minetl) + +maxqu <- add.objective(portfolio=pspec, type="return", name="mean") +maxqu <- add.objective(portfolio=maxqu, type="risk", name="var", risk_aversion=0.25) +opt_maxqu <- optimize.portfolio(R=R, portfolio=maxqu, optimize_method="ROI") +summary(opt_maxqu) From noreply at r-forge.r-project.org Tue Aug 27 04:53:19 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 27 Aug 2013 04:53:19 +0200 (CEST) Subject: [Returnanalytics-commits] r2899 - in pkg/PortfolioAnalytics: R man sandbox Message-ID: <20130827025320.03228185509@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-27 04:53:19 +0200 (Tue, 27 Aug 2013) New Revision: 2899 Added: pkg/PortfolioAnalytics/man/etl_milp_opt.Rd pkg/PortfolioAnalytics/man/etl_opt.Rd pkg/PortfolioAnalytics/man/gmv_opt.Rd pkg/PortfolioAnalytics/man/gmv_opt_toc.Rd pkg/PortfolioAnalytics/man/maxret_milp_opt.Rd pkg/PortfolioAnalytics/man/maxret_opt.Rd Modified: pkg/PortfolioAnalytics/R/optFUN.R pkg/PortfolioAnalytics/R/optimize.portfolio.R pkg/PortfolioAnalytics/sandbox/testing_turnover.gmv.R Log: Adding function for gmv/qu with turnover constraints. Adding documentation for the QP/LP optimization functions.
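A note on the approach taken in the diffs that follow: a turnover cap sum(|w_i - w_init_i|) <= toc is not directly expressible in a QP, so the new gmv_opt_toc function linearizes it by splitting each final weight into three stacked variables, w.initial + w.buy + w.sell, with w.buy >= 0 and w.sell <= 0, so that turnover becomes the linear quantity sum(w.buy) - sum(w.sell). The sketch below is a minimal, self-contained illustration of that decomposition using quadprog only; it is not the committed code, and the toy return matrix, the 20% turnover budget, and all object names (w.init, toc, A.init, and so on) are illustrative assumptions.

library(quadprog)

set.seed(42)
R <- matrix(rnorm(400, mean = 0.005, sd = 0.02), ncol = 4)  # hypothetical returns, 100 obs x 4 assets
N <- ncol(R)
V <- cov(R)
w.init <- rep(1 / N, N)  # assumed starting portfolio
toc <- 0.20              # assumed turnover budget

# Variables are stacked as (w.initial, w.buy, w.sell); the final weight is
# their sum, so the covariance block repeats across all nine N x N blocks.
# A small ridge keeps Dmat positive definite, as solve.QP requires.
Dmat <- kronecker(matrix(1, 3, 3), V) + diag(1e-8, 3 * N)
dvec <- rep(0, 3 * N)

# Equality rows: pin w.initial to the starting weights, then full investment.
A.init <- cbind(diag(N), matrix(0, N, 2 * N))
A.sum  <- rep(1, 3 * N)
# Inequality rows in solve.QP's ">=" form: the turnover budget
# sum(w.buy) - sum(w.sell) <= toc, plus sign constraints on buys and sells.
A.toc  <- c(rep(0, N), rep(-1, N), rep(1, N))
A.buy  <- cbind(matrix(0, N, N), diag(N), matrix(0, N, N))
A.sell <- cbind(matrix(0, N, 2 * N), -diag(N))

Amat <- t(rbind(A.init, A.sum, A.toc, A.buy, A.sell))
bvec <- c(w.init, 1, -toc, rep(0, 2 * N))

fit <- solve.QP(Dmat, dvec, Amat, bvec, meq = N + 1)
w.final <- fit$solution[1:N] + fit$solution[(N + 1):(2 * N)] + fit$solution[(2 * N + 1):(3 * N)]
sum(abs(w.final - w.init))  # realized turnover, bounded above by toc

Because |w.buy_i + w.sell_i| <= w.buy_i - w.sell_i under the sign constraints, capping sum(w.buy) - sum(w.sell) caps true turnover. The committed gmv_opt_toc builds the same block structure with cbind(R, R, R) and appends the box, group, and factor-exposure rows to the same constraint matrix.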
Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-26 19:17:35 UTC (rev 2898) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-27 02:53:19 UTC (rev 2899) @@ -1,5 +1,15 @@ ##### GMV and QU QP Function ##### +#' Optimization function to solve minimum variance or maximum quadratic utility problems +#' +#' This function is called by optimize.portfolio to solve minimum variance or maximum quadratic utility problems +#' +#' @param R xts object of asset returns +#' @param constraints object of constraints in the portfolio object extracted with \code{get_constraints} +#' @param moments object of moments computed based on objective functions +#' @param lambda risk_aversion parameter +#' @param target target return value +#' @author Ross Bennett gmv_opt <- function(R, constraints, moments, lambda, target){ N <- ncol(R) @@ -66,6 +76,15 @@ } ##### Maximize Return LP Function ##### +#' Optimization function to solve maximum return problems +#' +#' This function is called by optimize.portfolio to solve maximum return problems +#' +#' @param R xts object of asset returns +#' @param constraints object of constraints in the portfolio object extracted with \code{get_constraints} +#' @param moments object of moments computed based on objective functions +#' @param target target return value +#' @author Ross Bennett maxret_opt <- function(R, moments, constraints, target){ N <- ncol(R) @@ -137,6 +156,15 @@ } ##### Maximize Return MILP Function ##### +#' Optimization function to solve maximum return problems via MILP +#' +#' This function is called by optimize.portfolio to solve maximum return problems via mixed integer linear programming +#' +#' @param R xts object of asset returns +#' @param constraints object of constraints in the portfolio object extracted with \code{get_constraints} +#' @param moments object of moments computed based on objective functions +#' @param target target return value +#' @author Ross Bennett maxret_milp_opt <- function(R, constraints, moments, target){ N <- ncol(R) @@ -226,6 +254,16 @@ } ##### Minimize ETL LP Function ##### +#' Optimization function to solve minimum ETL problems +#' +#' This function is called by optimize.portfolio to solve minimum ETL problems +#' +#' @param R xts object of asset returns +#' @param constraints object of constraints in the portfolio object extracted with \code{get_constraints} +#' @param moments object of moments computed based on objective functions +#' @param target target return value +#' @param alpha alpha value for ETL/ES/CVaR +#' @author Ross Bennett etl_opt <- function(R, constraints, moments, target, alpha){ N <- ncol(R) @@ -277,6 +315,16 @@ } ##### Minimize ETL MILP Function ##### +#' Optimization function to solve minimum ETL problems via MILP +#' +#' This function is called by optimize.portfolio to solve minimum ETL problems via mixed integer linear programming +#' +#' @param R xts object of asset returns +#' @param constraints object of constraints in the portfolio object extracted with \code{get_constraints} +#' @param moments object of moments computed based on objective functions +#' @param target target return value +#' @param alpha alpha value for ETL/ES/CVaR +#' @author Ross Bennett etl_milp_opt <- function(R, constraints, moments, target, alpha){ # Number of rows @@ -389,3 +437,132 @@ #out$call <- call # add this outside of here, this function doesn't have the call return(out) } + +##### minimize variance or maximize quadratic utility with turnover constraints ##### +#' Optimization function to solve minimum variance or maximum quadratic utility problems with turnover constraints +#' +#' This function is called by optimize.portfolio to solve minimum variance or maximum quadratic utility problems with turnover constraints +#' +#' @param R xts object of asset returns +#' @param constraints object of constraints in the portfolio object extracted with \code{get_constraints} +#' @param moments object of moments computed based on objective functions +#' @param lambda risk_aversion parameter +#' @param target target return value +#' @param init_weights initial weights to compute turnover +#' @author Ross Bennett +gmv_opt_toc <- function(R, constraints, moments, lambda, target, init_weights){ + # function for minimum variance or max quadratic utility problems with turnover constraints + + # Modify the returns matrix. This is done because there are 3 sets of + # variables 1) w.initial, 2) w.buy, and 3) w.sell + returns <- cbind(R, R, R) + V <- cov(returns) + + # number of assets + N <- ncol(R) + + # initial weights for solver + if(is.null(init_weights)) init_weights <- rep(1/N, N) + + # Amat for initial weights + Amat <- cbind(diag(N), matrix(0, nrow=N, ncol=N*2)) + rhs <- init_weights + dir <- rep("==", N) + meq <- N # one equality row per initial weight + + # check for a target return constraint + if(!is.na(target)) { + # If var is the only objective specified, then moments$mean won't be calculated + if(all(moments$mean==0)){ + tmp_means <- colMeans(R) + } else { + tmp_means <- moments$mean + } + Amat <- rbind(Amat, rep(tmp_means, 3)) + dir <- c(dir, "==") + rhs <- c(rhs, target) + meq <- N + 1 # plus the target return equality + } + + # Amat for full investment constraint + Amat <- rbind(Amat, rbind(rep(1, N*3), rep(-1, N*3))) + rhs <- c(rhs, constraints$min_sum, -constraints$max_sum) + dir <- c(dir, ">=", ">=") + + # Amat for lower box constraints + Amat <- rbind(Amat, cbind(diag(N), diag(N), diag(N))) + rhs <- c(rhs, constraints$min) + dir <- c(dir, rep(">=", N)) + + # Amat for upper box constraints + Amat <- rbind(Amat, cbind(-diag(N), -diag(N), -diag(N))) + rhs <- c(rhs, -constraints$max) + dir <- c(dir, rep(">=", N)) + + # Amat for turnover constraints + Amat <- rbind(Amat, c(rep(0, N), rep(-1, N), rep(1, N))) + rhs <- c(rhs, -constraints$toc) + dir <- c(dir, ">=") + + # Amat for positive weights + Amat <- rbind(Amat, cbind(matrix(0, nrow=N, ncol=N), diag(N), matrix(0, nrow=N, ncol=N))) + rhs <- c(rhs, rep(0, N)) + dir <- c(dir, rep(">=", N)) + + # Amat for negative weights + Amat <- rbind(Amat, cbind(matrix(0, nrow=N, ncol=2*N), -diag(N))) + rhs <- c(rhs, rep(0, N)) + dir <- c(dir, rep(">=", N)) + + # include group constraints + if(try(!is.null(constraints$groups), silent=TRUE)){ + n.groups <- length(constraints$groups) + Amat.group <- matrix(0, nrow=n.groups, ncol=N) + for(i in 1:n.groups){ + Amat.group[i, constraints$groups[[i]]] <- 1 + } + if(is.null(constraints$cLO)) cLO <- rep(-Inf, n.groups) + if(is.null(constraints$cUP)) cUP <- rep(Inf, n.groups) + Amat <- rbind(Amat, cbind(Amat.group, Amat.group, Amat.group)) + Amat <- rbind(Amat, cbind(-Amat.group, -Amat.group, -Amat.group)) + dir <- c(dir, rep(">=", (n.groups + n.groups))) + rhs <- c(rhs, constraints$cLO, -constraints$cUP) + } + + # Add the factor exposures to Amat, dir, and rhs + if(!is.null(constraints$B)){ + t.B <- t(constraints$B) + Amat <- rbind(Amat, cbind(t.B, t.B, t.B)) + Amat <- rbind(Amat, cbind(-t.B, -t.B, -t.B)) + dir <- c(dir, rep(">=", 2 * nrow(t.B))) + rhs <- c(rhs, constraints$lower, -constraints$upper) + } + + d <- rep(-moments$mean, 3) + + qp.result <- try(solve.QP(Dmat=make.positive.definite(2*lambda*V), + dvec=d, Amat=t(Amat), bvec=rhs, meq=meq), silent=TRUE) + if(inherits(qp.result, "try-error")) stop("No solution found, consider adjusting constraints.") + + wts <- qp.result$solution + wts.final <- wts[(1:N)] + wts[(1+N):(2*N)] + wts[(2*N+1):(3*N)] + + weights <- wts.final + names(weights) <- colnames(R) + out <- list() + out$weights <- weights + out$out <- qp.result$value + return(out) + + # TODO + # Get this working with ROI + + # Not getting solution using ROI + # set up the quadratic objective + # ROI_objective <- Q_objective(Q=make.positive.definite(2*lambda*V), L=rep(-moments$mean, 3)) + + # opt.prob <- OP(objective=ROI_objective, + # constraints=L_constraint(L=Amat, dir=dir, rhs=rhs)) + # roi.result <- ROI_solve(x=opt.prob, solver="quadprog") +} + Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-26 19:17:35 UTC (rev 2898) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-27 02:53:19 UTC (rev 2899) @@ -703,10 +703,17 @@ if("var" %in% names(moments)){ # Minimize variance if the only objective specified is variance # Maximize Quadratic Utility if var and mean are specified as objectives - roi_result <- gmv_opt(R=R, constraints=constraints, moments=moments, lambda=lambda, target=target) - weights <- roi_result$weights - obj_vals <- constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures - out <- list(weights=weights, objective_measures=obj_vals, opt_values=obj_vals, out=roi_result$out, call=call) + if(!is.null(constraints$turnover_target)){ + qp_result <- gmv_opt_toc(R=R, constraints=constraints, moments=moments, lambda=lambda, target=target, init_weights=portfolio$assets) + weights <- qp_result$weights + obj_vals <- constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures + out <- list(weights=weights, objective_measures=obj_vals, opt_values=obj_vals, out=qp_result$out, call=call) + } else { + roi_result <- gmv_opt(R=R, constraints=constraints, moments=moments, lambda=lambda, target=target) + weights <- roi_result$weights + obj_vals <- constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures + out <- list(weights=weights, objective_measures=obj_vals, opt_values=obj_vals, out=roi_result$out, call=call) + } } if(length(names(moments)) == 1 & "mean" %in% names(moments)) { # Maximize return if the only objective specified is mean Added: pkg/PortfolioAnalytics/man/etl_milp_opt.Rd =================================================================== --- pkg/PortfolioAnalytics/man/etl_milp_opt.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/etl_milp_opt.Rd 2013-08-27 02:53:19 UTC (rev 2899) @@ -0,0 +1,27 @@ +\name{etl_milp_opt} +\alias{etl_milp_opt} +\title{Optimization function to solve minimum ETL problems via MILP} +\usage{ + etl_milp_opt(R, constraints, moments, target, alpha) +} +\arguments{ + \item{R}{xts object of asset returns} + + \item{constraints}{object of constraints in the portfolio + object extracted with \code{get_constraints}} + + \item{moments}{object of moments computed based on + objective functions} + + \item{target}{target return value} + + \item{alpha}{alpha value for ETL/ES/CVaR} +} +\description{ + This function is called by optimize.portfolio to solve + minimum ETL problems via MILP +} +\author{ + Ross Bennett +} + Added: pkg/PortfolioAnalytics/man/etl_opt.Rd =================================================================== --- pkg/PortfolioAnalytics/man/etl_opt.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/etl_opt.Rd 2013-08-27 02:53:19 UTC (rev 2899) @@ -0,0 +1,27 @@ +\name{etl_opt} +\alias{etl_opt} +\title{Optimization function to solve minimum ETL problems} +\usage{ + etl_opt(R, constraints, moments, target, alpha) +} +\arguments{ + \item{R}{xts object of asset returns} + + \item{constraints}{object of constraints in the portfolio + object extracted with \code{get_constraints}} + + \item{moments}{object of moments computed based on + objective functions} + + \item{target}{target return value} + + \item{alpha}{alpha value for ETL/ES/CVaR} +} +\description{ + This function is called by optimize.portfolio to solve + minimum ETL problems +} +\author{ + Ross Bennett +} + Added: pkg/PortfolioAnalytics/man/gmv_opt.Rd =================================================================== --- pkg/PortfolioAnalytics/man/gmv_opt.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/gmv_opt.Rd 2013-08-27 02:53:19 UTC (rev 2899) @@ -0,0 +1,27 @@ +\name{gmv_opt} +\alias{gmv_opt} +\title{Optimization function to solve minimum variance or maximum quadratic utility problems} +\usage{ + gmv_opt(R, constraints, moments, lambda, target) +} +\arguments{ + \item{R}{xts object of asset returns} + + \item{constraints}{object of constraints in the portfolio + object extracted with \code{get_constraints}} + + \item{moments}{object of moments computed based on + objective functions} + + \item{lambda}{risk_aversion parameter} + + \item{target}{target return value} +} +\description{ + This function is called by optimize.portfolio to solve + minimum variance or maximum quadratic utility problems +} +\author{ + Ross Bennett +} + Added: pkg/PortfolioAnalytics/man/gmv_opt_toc.Rd =================================================================== --- pkg/PortfolioAnalytics/man/gmv_opt_toc.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/gmv_opt_toc.Rd 2013-08-27 02:53:19 UTC (rev 2899) @@ -0,0 +1,30 @@ +\name{gmv_opt_toc} +\alias{gmv_opt_toc} +\title{Optimization function to solve minimum variance or maximum quadratic utility problems with turnover constraints} +\usage{ + gmv_opt_toc(R, constraints, moments, lambda, target, + init_weights) +} +\arguments{ + \item{R}{xts object of asset returns} + + \item{constraints}{object of constraints in the portfolio + object extracted with \code{get_constraints}} + + \item{moments}{object of moments computed based on + objective functions} + + \item{lambda}{risk_aversion parameter} + + \item{target}{target return value} + + \item{init_weights}{initial weights to compute turnover} +} +\description{ + This function is called by optimize.portfolio to solve + minimum variance or maximum quadratic utility problems + with turnover constraints +} +\author{ + Ross Bennett +} + Added: pkg/PortfolioAnalytics/man/maxret_milp_opt.Rd =================================================================== --- pkg/PortfolioAnalytics/man/maxret_milp_opt.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/maxret_milp_opt.Rd 2013-08-27 02:53:19 UTC (rev 2899) @@ -0,0 +1,25 @@ +\name{maxret_milp_opt} +\alias{maxret_milp_opt} +\title{Optimization function to solve maximum return problems via MILP} +\usage{ + maxret_milp_opt(R, constraints, moments, target) +} +\arguments{ + \item{R}{xts object of asset returns} + + \item{constraints}{object of constraints in the portfolio + object extracted with \code{get_constraints}} + + \item{moments}{object of moments computed based on + objective functions} + + \item{target}{target return value} +} +\description{ + This function is called by optimize.portfolio to solve + maximum return problems via MILP +} +\author{ + Ross Bennett +} + Added: pkg/PortfolioAnalytics/man/maxret_opt.Rd =================================================================== --- pkg/PortfolioAnalytics/man/maxret_opt.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/maxret_opt.Rd 2013-08-27 02:53:19 UTC (rev 2899) @@ -0,0 +1,25 @@ +\name{maxret_opt} +\alias{maxret_opt} +\title{Optimization function to solve maximum return problems} +\usage{ + maxret_opt(R, moments, constraints, target) +} +\arguments{ + \item{R}{xts object of asset returns} + + \item{constraints}{object of constraints in the portfolio + object extracted with \code{get_constraints}} + + \item{moments}{object of moments computed based on + objective functions} + + \item{target}{target return value} +} +\description{ + This function is called by optimize.portfolio to solve + maximum return problems +} +\author{ + Ross Bennett +} + Modified: pkg/PortfolioAnalytics/sandbox/testing_turnover.gmv.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_turnover.gmv.R 2013-08-26 19:17:35 UTC (rev 2898) +++ pkg/PortfolioAnalytics/sandbox/testing_turnover.gmv.R 2013-08-27 02:53:19 UTC (rev 2899) @@ -2,132 +2,156 @@ # script to solve the gmv optimization problem with turnover constraints using quadprog or ROI library(PortfolioAnalytics) -library(PerformanceAnalytics) +library(ROI) +library(ROI.plugin.quadprog) library(quadprog) library(corpcor) -# TODO Add documentation for function +data(edhec) +R <- edhec[, 1:4] +init.weights <- rep(1/4, 4) +min <- rep(0.1, 4) +max <- rep(0.6, 4) +toc <- 0.8 +lambda <- 0.25 +mu <- colMeans(R) +target <- 0.0071 + +group_mat <- cbind(c(1, 1, 0, 0), c(0, 0, 1, 1)) +grp_min <- c(0.05, 0.05) +grp_max <- c(0.85, 0.85) + # Computes optimal weights for global minimum variance portfolio with # constraints including turnover constraint -turnover.gmv <- function(R, toc, weight.i, min, max){ - - # number of assets in R - p <- ncol(R) - - # Modify the returns matrix.
This is done because there are 3 sets of - # variables w.initial, w.buy, and w.sell - returns <- cbind(R, R, R) - - V <- cov(returns) - V <- make.positive.definite(V) - - # A matrix for initial weights - A2 <- cbind(rep(1, p*3), rbind(diag(p), matrix(0, nrow=2*p, ncol=p))) - - # A matrix for lower box constraints - Alo <- rbind(diag(p), diag(p), diag(p)) - - # A matrix for upper box constraints - Aup <- rbind(-diag(p), -diag(p), -diag(p)) - - # vector to appply turnover constraint - A3 <- c(rep(0, p), rep(-1, p), rep(1, p)) - - # matrix for positive weight - A4 <- rbind(matrix(0, nrow=p, ncol=p), diag(p), matrix(0, nrow=p, ncol=p)) - - # matrix for negative weight - A5 <- rbind(matrix(0, nrow=p*2, ncol=p), -diag(p)) - - # Combine the temporary A matrices - A.c <- cbind(A2, Alo, Aup, A3, A4, A5) - - # b vector holding the values of the constraints - b <- c(1, weight.i, min, -max, -toc, rep(0, 2*p)) - - # no linear term so set this equal to 0s - d <- rep(0, p*3) - - sol <- solve.QP(Dmat=V, dvec=d, Amat=A.c, bvec=b, meq=6) - wts <- sol$solution - wts.final <- wts[(1:p)] + wts[(1+p):(2*p)] + wts[(2*p+1):(3*p)] - wts.final -} -data(edhec) -ret <- edhec[,1:5] +# number of assets in R +p <- ncol(R) -# box constraints min and max -min <- rep(0.1, 5) -max <- rep(0.6, 5) +# Modify the returns matrix. This is done because there are 3 sets of +# variables w.initial, w.buy, and w.sell +returns <- cbind(R, R, R) -# turnover constraint -toc <- 0.3 +V <- cov(returns) +V <- make.positive.definite(V) -# Initial weights vector -weight.i <- rep(1/5,5) +# A matrix for full investment, mean, and initial weights +A2 <- cbind(rbind(diag(p), matrix(0, nrow=2*p, ncol=p)), + rep(mu, 3), + rep(1, p*3), rep(-1, p*3)) -opt.wts <- turnover.gmv(R=ret, toc=toc, weight.i=weight.i, min=min, max=max) -opt.wts +# A matrix for lower box constraints +Alo <- rbind(diag(p), diag(p), diag(p)) +# A matrix for upper box constraints +Aup <- rbind(-diag(p), -diag(p), -diag(p)) + +# vector to appply turnover constraint +A3 <- c(rep(0, p), rep(-1, p), rep(1, p)) + +# matrix for positive weight +A4 <- rbind(matrix(0, nrow=p, ncol=p), diag(p), matrix(0, nrow=p, ncol=p)) + +# matrix for negative weight +A5 <- rbind(matrix(0, nrow=p*2, ncol=p), -diag(p)) + +# Combine the temporary A matrices +A.c <- cbind(A2, Alo, Aup, A3, A4, A5) + +# Constraints matrix for group_mat +A.c <- cbind(A.c, rbind(group_mat, group_mat, group_mat)) +A.c <- cbind(A.c, rbind(-group_mat, -group_mat, -group_mat)) + +# b vector holding the values of the constraints +b <- c(init.weights, target, 0.99, -1.01, min, -max, -toc, rep(0, 2*p)) +b <- c(b, grp_min, -grp_max) + +# no linear term so set this equal to 0s +d <- rep(0, p*3) +# d <- rep(-mu, 3) + +sol <- try(solve.QP(Dmat=make.positive.definite(2*lambda*V), dvec=d, + Amat=A.c, bvec=b, meq=4), silent=TRUE) +if(inherits(sol, "try-error")) message("No solution found") +wts <- sol$solution +wts.final <- wts[(1:p)] + wts[(1+p):(2*p)] + wts[(2*p+1):(3*p)] +wts.final +sum(wts.final) +wts.final %*% mu + # calculate turnover -to <- sum(abs(diff(rbind(weight.i, opt.wts)))) +to <- sum(abs(diff(rbind(init.weights, wts.final)))) to ##### ROI Turnover constraints using ROI solver ##### -# Not working correctly. 
-# Getting a solution now, but results are different than turnover.gmv -# library(ROI) -# library(ROI.plugin.quadprog) -# -# -# # Use the first 5 funds in edhec for the returns data -# ret <- edhec[, 1:5] -# returns <- cbind(ret, ret, ret) -# -# V <- cov(returns) -# V <- corpcor:::make.positive.definite(V) -# mu <- apply(returns, 2, mean) -# # number of assets -# N <- ncol(returns) -# -# # Set the box constraints for the minimum and maximum asset weights -# min <- rep(0.1, N/3) -# max <- rep(0.6, N/3) -# -# # Set the bounds -# bnds <- list(lower = list(ind = seq.int(1L, N/3), val = as.numeric(min)), -# upper = list(ind = seq.int(1L, N/3), val = as.numeric(max))) -# lambda <- 1 -# ROI_objective <- ROI:::Q_objective(Q=2*lambda*V, L=-mu*0) -# -# # Set up the Amat -# # min_sum and max_sum of weights -# A1 <- rbind(rep(1, N), rep(1, N)) -# -# # initial weight matrix -# A.iw <- cbind(diag(N/3), matrix(0, nrow=N/3, ncol=2*N/3)) -# -# # turnover vector -# A.t <- c(rep(0, N/3), rep(-1, N/3), rep(1, N/3)) -# -# A.wpos <- t(cbind(rbind(matrix(0, ncol=N/3, nrow=N/3), diag(N/3), matrix(0, ncol=N/3, nrow=N/3)), -# rbind(matrix(0, ncol=N/3, nrow=2*N/3), -diag(N/3)))) -# -# Amat <- rbind(A1, A.iw, A.t, A.wpos) -# -# dir.vec <- c(">=","<=", rep("==", N/3), "<=", rep(">=", 2*N/3)) -# min_sum=1 -# max_sum=1 -# w.init <- rep(1/5, 5) -# toc <- 0.3 -# rhs.vec <- c(min_sum, max_sum, w.init, toc, rep(0, 2*N/3)) -# -# opt.prob <- ROI:::OP(objective=ROI_objective, -# constraints=ROI:::L_constraint(L=Amat, dir=dir.vec, rhs=rhs.vec), -# bounds=bnds) -# roi.result <- ROI:::ROI_solve(x=opt.prob, solver="quadprog") -# wts.tmp <- roi.result$solution -# wts <- wts.tmp[1:(N/3)] + wts.tmp[(N/3+1):(2*N/3)] + wts.tmp[(2*N/3+1):N] -# wts \ No newline at end of file +N <- ncol(R) + +tmpR <- cbind(R, R, R) +V <- var(tmpR) +V <- corpcor:::make.positive.definite(V) +# lambda <- 0.25 +# mu <- colMeans(R) + +# Amat for full investment constraint +Amat <- rbind(rep(1, N*3), rep(1, N*3)) +rhs <- c(1, 1) +dir <- c("==", "==") +# dir <- c(">=", "<=") + +# Amat for initial weights +Amat <- rbind(Amat, cbind(diag(N), matrix(0, nrow=N, ncol=N*2))) +rhs <- c(rhs, init.weights) +dir <- c(dir, rep("==", N)) + +# Amat for lower box constraints +Amat <- rbind(Amat, cbind(diag(N), diag(N), diag(N))) +rhs <- c(rhs, min) +dir <- c(dir, rep(">=", N)) + +# Amat for upper box constraints +Amat <- rbind(Amat, cbind(-diag(N), -diag(N), -diag(N))) +rhs <- c(rhs, -max) +dir <- c(dir, rep(">=", N)) + +# Amat for turnover constraints +Amat <- rbind(Amat, c(rep(0, N), rep(-1, N), rep(1, N))) +rhs <- c(rhs, -toc) +dir <- c(dir, ">=") + +# Amat for positive weights +Amat <- rbind(Amat, cbind(matrix(0, nrow=N, ncol=N), diag(N), matrix(0, nrow=N, ncol=N))) +rhs <- c(rhs, rep(0, N)) +dir <- c(dir, rep(">=", N)) + +# Amat for negative weights +Amat <- rbind(Amat, cbind(matrix(0, nrow=N, ncol=2*N), -diag(N))) +rhs <- c(rhs, rep(0, N)) +dir <- c(dir, rep(">=", N)) + +# set up the quadratic objective +ROI_objective <- Q_objective(Q=make.positive.definite(2 * lambda * V), L=rep(0, N*3)) + +opt.prob <- OP(objective=ROI_objective, + constraints=L_constraint(L=Amat, dir=dir, rhs=rhs)) +roi.result <- ROI_solve(x=opt.prob, solver="quadprog") +# not sure why no solution with ROI +print.default(roi.result) + +# check that the same constraints matrix and rhs vectors are used +all.equal(t(Amat), A.c, check.attributes=FALSE) +all.equal(rhs, b) + +# run solve.QP using Amat and rhs from ROI problem +qp.result <- solve.QP(Dmat=make.positive.definite(2*lambda*V), + dvec=rep(0, 
N*3), Amat=t(Amat), bvec=rhs, meq=6) + +# results with solve.QP are working, but not with ROI +all.equal(qp.result$solution, sol$solution) +all.equal(roi.result$solution, sol$solution) + +roi.result$solution +qp.result$solution +sol$solution + + + From noreply at r-forge.r-project.org Tue Aug 27 07:10:37 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 27 Aug 2013 07:10:37 +0200 (CEST) Subject: [Returnanalytics-commits] r2900 - pkg/PortfolioAnalytics/vignettes Message-ID: <20130827051037.99727184695@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-27 07:10:37 +0200 (Tue, 27 Aug 2013) New Revision: 2900 Modified: pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw pkg/PortfolioAnalytics/vignettes/ROI_vignette.pdf pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw pkg/PortfolioAnalytics/vignettes/portfolio_vignette.pdf Log: Modifying vignettes to add new group constraint specification Modified: pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw =================================================================== --- pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw 2013-08-27 02:53:19 UTC (rev 2899) +++ pkg/PortfolioAnalytics/vignettes/ROI_vignette.Rnw 2013-08-27 05:10:37 UTC (rev 2900) @@ -198,7 +198,9 @@ # Add group constraints portf_minvar <- add.constraint(portfolio=portf_minvar, type="group", - groups=c(1, 2, 1), + groups=list(groupA=1, + groupB=c(2, 3), + groupC=4), group_min=c(0, 0.25, 0.10), group_max=c(0.45, 0.6, 0.5)) @ Modified: pkg/PortfolioAnalytics/vignettes/ROI_vignette.pdf =================================================================== (Binary files differ) Modified: pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw =================================================================== --- pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw 2013-08-27 02:53:19 UTC (rev 2899) +++ pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw 2013-08-27 05:10:37 UTC (rev 2900) @@ -110,7 +110,8 @@ <<>>= # Add group constraints pspec <- add.constraint(portfolio=pspec, type="group", - groups=c(3, 1), + groups=list(groupA=c(1, 2, 3), + groupB=4), group_min=c(0.1, 0.15), group_max=c(0.85, 0.55), group_labels=c("GroupA", "GroupB")) @ Modified: pkg/PortfolioAnalytics/vignettes/portfolio_vignette.pdf =================================================================== (Binary files differ) From noreply at r-forge.r-project.org Tue Aug 27 12:30:38 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 27 Aug 2013 12:30:38 +0200 (CEST) Subject: [Returnanalytics-commits] r2901 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: .
data man src vignettes Message-ID: <20130827103038.7407A184F90@r-forge.r-project.org> Author: shubhanm Date: 2013-08-27 12:30:38 +0200 (Tue, 27 Aug 2013) New Revision: 2901 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/data/ pkg/PerformanceAnalytics/sandbox/Shubhankit/data/HAM3-data.csv pkg/PerformanceAnalytics/sandbox/Shubhankit/data/edhec.csv pkg/PerformanceAnalytics/sandbox/Shubhankit/data/edhec.rda pkg/PerformanceAnalytics/sandbox/Shubhankit/data/managers.csv pkg/PerformanceAnalytics/sandbox/Shubhankit/data/managers.rda pkg/PerformanceAnalytics/sandbox/Shubhankit/data/portfolio_bacon.csv pkg/PerformanceAnalytics/sandbox/Shubhankit/data/portfolio_bacon.rda pkg/PerformanceAnalytics/sandbox/Shubhankit/data/prices.rda pkg/PerformanceAnalytics/sandbox/Shubhankit/data/weights.rda pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.normDD.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/src/.Rhistory pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/Cheklov.CDDOpt.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/Commodity_ResearchReport.Rnw Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rbuildignore pkg/PerformanceAnalytics/sandbox/Shubhankit/man/EMaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/GLMReturn.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/GLMSmoothIndex.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/LoSharpe.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/LoSharpeRatio.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/MaximumLoss.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/OkunevWhite.Rnw Log: /.Rnw Modified +Addition Demo for Commodity Traded Fund Returns [Ongoing] Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rbuildignore =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rbuildignore 2013-08-27 05:10:37 UTC (rev 2900) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rbuildignore 2013-08-27 10:30:38 UTC (rev 2901) @@ -1,5 +1,7 @@ -sandbox -generatechangelog\.sh -ChangeLog\.1\.0\.0 -week* -Week* +sandbox +generatechangelog\.sh +ChangeLog\.1\.0\.0 +week* +Week* +^.*\.Rproj$ +^\.Rproj\.user$ Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/data/HAM3-data.csv =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/data/HAM3-data.csv (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/data/HAM3-data.csv 2013-08-27 10:30:38 UTC (rev 2901) @@ -0,0 +1,153 @@ +,HAM31,HAM32,HAM33,HAM34,HAM35,HAM36,HAM37,HFRX Commodity Index,DJUBS Commodity,Morningstar CLS,Newedge CTI +2001-01-31,,,-0.0365,0.0989,,,,,-0.0235,-0.0087,-0.0061 +2001-02-28,,,0.0051,-0.1894,,,,,-0.0042,0.0083,0.0087 +2001-03-31,,,0.0909,-0.0115,,,,,-0.0426,0.0244,0.0086 +2001-04-30,,,0.0446,0.0574,,,,,0.0352,0.0202,0.0055 +2001-05-31,,,-0.1056,-0.1345,,,,,-0.021,0.0103,-0.0082 +2001-06-30,,,0.1509,0.1151,,,,,-0.0399,-0.0141,0.0049 +2001-07-31,,,0.0736,0.0789,,,,,0.013,-0.0028,0.0428 +2001-08-31,,,-0.0578,-0.0866,,,,,-0.0004,0.0014,-0.0393 +2001-09-30,,,-0.0411,-0.14,,,,,-0.0675,0.0071,0.0364 +2001-10-31,,,-0.0572,-0.0306,,,,,-0.0475,-0.0014,0.0527 +2001-11-30,,,0.0504,-0.0296,,,,,0.0077,-0.0284,0.0082 +2001-12-31,,,-0.0205,0.1326,,,,,-0.0197,0.0196,-0.0449 +2002-01-31,,,-0.0268,-0.0114,,,,,-0.0067,-0.0025,-0.0073 +2002-02-28,,,0.1251,0.0546,,,,,0.0259,0.0138,0.0244 +2002-03-31,,,0.0854,-0.092,,,,,0.1023,-0.0086,0.0082 +2002-04-30,,,0.0245,0.2026,,,,,0,0.0179,0.0025 
+2002-05-31,,,0.124,0.2568,,,,,-0.0154,-0.0308,0.0317 +2002-06-30,,,0.0884,-0.0148,,,,,0.0194,0.0172,0.0123 +2002-07-31,,,0.0199,-0.0853,,,,,-0.0054,0.0013,0.0727 +2002-08-31,,,0.005,-0.083,,,,,0.0394,0.0326,0.0605 +2002-09-30,,,0.0064,-0.1063,,,,,0.0377,0.0325,0.0119 +2002-10-31,,,0.0406,0.2691,,,,,-0.0103,-0.0221,0.0578 +2002-11-30,,,0.0021,0.1537,,,,,0.0028,-0.0144,0.1085 +2002-12-31,,,0.0043,-0.127,,,,,0.049,0.0647,-0.0056 +2003-01-31,,,0.0063,-0.0039,,,,,0.077,0.0689,-0.0059 +2003-02-28,,,-0.0008,-0.1253,,,,,0.0336,0.0463,-0.0038 +2003-03-31,,,0,0.1891,,,,,-0.0754,-0.0828,-0.0507 +2003-04-30,,,-0.0235,0.1473,,,,,-0.0062,-0.01,-0.027 +2003-05-31,,,0.0172,0.1452,,,,,0.0585,0.0214,0.0173 +2003-06-30,,,-0.0078,-0.0079,,,,,-0.0247,-0.0177,-0.0017 +2003-07-31,,,0.0017,-0.0183,,,,,0.006,-0.0018,0.1263 +2003-08-31,,,0.0089,0.05,,,,,0.0395,-0.0015,0.0365 +2003-09-30,,,0.001,-0.0014,,,,,0.0008,-0.0242,0.0405 +2003-10-31,,,-0.0019,0.077,,,,,0.0478,0.0361,0.0998 +2003-11-30,,,0.0023,0.0613,,,,,-0.0031,0.0118,0.014 +2003-12-31,,,0.0033,0.1046,,,,,0.0738,0.035,0.0166 +2004-01-31,,,0.004,0.0064,,,,,0.0181,0.0028,0.009 +2004-02-29,,,0.0057,0.0467,,,,,0.0649,0.075,0.0586 +2004-03-31,,,0.0054,0.0491,,,,,0.0309,0.0311,0.0641 +2004-04-30,,,-0.017,-0.1178,,,-0.0127,,-0.0177,0.0098,-0.0047 +2004-05-31,,,0.0101,-0.0506,,,0.0068,,0.017,0.0029,-0.01 +2004-06-30,,,-0.0198,0.0048,,,0.0274,,-0.0415,-0.0114,-0.0186 +2004-07-31,,,0.0152,-0.0383,,,0.0542,,0.0177,0.0275,0.0335 +2004-08-31,,,0.0326,-0.023,,,-0.0176,,-0.0182,-0.0444,0.0216 +2004-09-30,,,0.0086,0.0069,,,0.0942,,0.0685,0.1082,0.0489 +2004-10-31,,,0.0022,-0.0151,,,-0.0428,,0.0169,0.036,0.0028 +2004-11-30,,,0.0355,0.0769,,,0.0043,,-0.012,-0.0168,0.0246 +2004-12-31,,,0.0039,0.0523,,,0.0154,,-0.0491,-0.0442,-0.0114 +2005-01-31,,,0.0032,-0.1002,,,0.0027,-0.0208,0.0084,0.0517,0.0094 +2005-02-28,,,0.0262,0.0328,,,-0.0037,0.0496,0.0686,-0.0161,-0.0224 +2005-03-31,,,-0.0065,-0.1257,,,0.0067,0.0155,0.0332,0.0398,-0.0188 +2005-04-30,,,0.0215,-0.0746,,,-0.0104,-0.0032,-0.0604,-0.0353,-0.0151 +2005-05-31,,,0.0075,0.0527,,,-0.0115,0.0335,-0.0102,-0.0132,0.0165 +2005-06-30,,,0.017,0.0289,,,0.0508,0.0336,0.0143,0.0137,0.0395 +2005-07-31,,,0.0121,0.0648,,,0.0112,0.0077,0.0421,0.0221,0.0202 +2005-08-31,,,-0.0239,0.0337,,,0.0104,-0.0118,0.0722,0.0935,0.0477 +2005-09-30,,,0.0208,0.1873,,,-0.0091,0.0272,0.0435,0.0099,0.0126 +2005-10-31,,,-0.0004,-0.0305,,,-0.0255,0.0094,-0.0657,-0.0649,-0.0036 +2005-11-30,,,0.0038,0.0887,,,0.0171,0.0352,-0.0008,-0.003,0.0271 +2005-12-31,,,0.0069,0.0786,,,0.0228,0.0357,0.0286,0.0071,0.0221 +2006-01-31,,,0.0253,0.0778,,0.0947,0.0259,0.0547,0.0147,0.0352,0.0789 +2006-02-28,,,0.0127,-0.0533,,-0.0226,0.0092,0.0145,-0.0659,-0.0382,-0.0068 +2006-03-31,,,0.0023,0.1126,,0.0247,-0.0023,0.0465,0.0183,0.0283,0.0437 +2006-04-30,,,0.0238,0.1367,,0.0041,0.0294,0.0245,0.0641,0.028,0.058 +2006-05-31,,,0.0055,0.0014,,0.0328,0.0066,-0.0162,0.0052,-0.0082,-0.0005 +2006-06-30,,,0.0025,-0.0348,,0.015,0.007,-0.017,-0.0195,-0.0114,-0.0205 +2006-07-31,-0.0043,,-0.0017,-0.0052,,-0.0288,-0.0082,-0.0191,0.0281,0.0189,0.0015 +2006-08-31,0.0021,,0.0023,0.0269,,0.0056,0.0311,-0.0142,-0.0402,-0.0249,-0.006 +2006-09-30,0.0287,0.0157,-0.0289,-0.0022,,-0.0406,0.0215,-0.0079,-0.061,-0.036,-0.0145 +2006-10-31,0.1026,0.0443,0.0406,0.0177,,0.0506,0.0394,0.0236,0.0397,-0.0189,0.0339 +2006-11-30,0.0394,0.0347,0.0216,0.0185,,0.1012,0.0154,0.0445,0.0503,0.0102,0.0416 +2006-12-31,-0.0244,-0.0019,0.0176,-0.0084,,0.0188,-0.0015,0.0156,-0.0497,-0.0146,0.002 
+2007-01-31,0.0021,-0.0458,-0.0169,0.0241,,-0.0077,0.0136,-0.008,-0.0025,0.0067,0.0059 +2007-02-28,0.0115,0.0589,0.0166,0.004,,0.0455,0.0275,0.0129,0.0296,0.019,0.0237 +2007-03-31,-0.0028,0.0354,0.0146,0.0005,,-0.1005,-0.0112,-0.0067,0.0056,-0.0106,-0.0004 +2007-04-30,-0.0175,0.0223,0.0015,0.0436,,-0.0284,-0.0089,0.014,0.0073,0.0077,-0.0002 +2007-05-31,0.0108,-0.0119,0.0222,0.0132,,0.0104,-0.0019,0.0061,-0.0028,0.0091,0.0023 +2007-06-30,0.0367,0.0109,0.0125,0.0074,,0.065,0.0184,0.0314,-0.0177,-0.0046,0.0205 +2007-07-31,0.0129,0.0231,0.0066,0.0223,,-0.0306,0.0262,-0.0139,0.0164,-0.0034,0.0058 +2007-08-31,-0.0071,0.0994,-0.0371,-0.0001,,-0.0247,-0.0324,-0.0051,-0.0399,0.018,-0.0206 +2007-09-30,0.0429,0.0467,0.0597,0.0544,,0.0763,0.0682,0.0483,0.0766,0.0604,0.0567 +2007-10-31,0.0149,0.1069,0.0305,0.0662,,0.0216,0.0424,0.0368,0.0296,0.039,0.0257 +2007-11-30,-0.0177,0.0497,0.005,0.0153,,0.0624,0.0164,-0.0022,-0.0342,0.0051,-0.0054 +2007-12-31,0.0458,0.1335,0.0267,0.0084,,0.0837,0.0421,0.015,0.0435,0.0504,0.0432 +2008-01-31,0.0829,0.0737,0.0461,-0.0002,,0.0202,0.0475,0.03,0.0396,0.0218,0.0293 +2008-02-29,0.1477,-0.017,0.0757,0.0327,,0.0152,0.1941,0.0911,0.1208,0.1162,0.0768 +2008-03-31,0.0207,0.0781,-0.0166,-0.1282,,-0.0305,-0.0607,-0.0129,-0.0646,-0.0509,-0.0417 +2008-04-30,0.0021,0.0904,0.0085,-0.0432,,0.0159,-0.0035,-0.0042,0.0345,0.0386,0.014 +2008-05-31,-0.0036,0.0162,0.0153,-0.0269,,0.0192,-0.0047,0.0014,0.0259,0.0481,0.0228 +2008-06-30,0.0465,0.0322,0.0315,-0.0038,,0.0445,0.0457,0.027,0.0892,0.0899,0.0422 +2008-07-31,-0.0042,-0.0505,0.0103,-0.0343,,0.0015,-0.0691,-0.0201,-0.1199,-0.0916,-0.0409 +2008-08-31,0.0152,-0.0169,-0.0311,0.0717,,-0.0126,-0.032,-0.0057,-0.0742,-0.1089,-0.0241 +2008-09-30,-0.0002,-0.0228,-0.0496,0.0062,,-0.0334,-0.0317,-0.0012,-0.1164,0.0036,-0.0282 +2008-10-31,0.0387,-0.0314,-0.0736,0.1335,,-0.0199,-0.0015,0.0333,-0.2134,0.0764,-0.0283 +2008-11-30,-0.0199,-0.0781,-0.0237,0.0158,,-0.0001,-0.0025,-0.0014,-0.0699,0.0036,0.0019 +2008-12-31,0.004,-0.0162,-0.0007,-0.0196,,0.0004,-0.0026,0.0056,-0.0449,-0.0123,0.0075 +2009-01-31,-0.0124,0.0328,0.0019,0.0225,,-0.0042,0.0058,-0.0027,-0.0538,-0.0055,0.0055 +2009-02-28,-0.0162,-0.0437,-0.0194,0.0053,,-0.0309,0.0026,-0.0056,-0.0445,0.0154,-0.002 +2009-03-31,-0.0349,0.0166,0.0111,0.0018,,0.003,0.0274,-0.0062,0.0358,-0.0296,0.0037 +2009-04-30,-0.019,0.0153,0.0192,0.0484,,0.0184,0.0195,-0.0124,0.0072,-0.0226,0.0044 +2009-05-31,0.0132,0.0018,0.0304,0.0759,,0.0436,0.0721,0.0107,0.1299,-0.0309,0.0469 +2009-06-30,-0.0229,0.0117,-0.0323,-0.0234,,-0.0308,-0.0083,-0.0069,-0.0191,0.0362,-0.0141 +2009-07-31,0.014,-0.0222,0.002,0.0245,,0.0014,0.0228,-0.0073,0.0321,-0.0079,0.0012 +2009-08-31,0.009,-0.0168,-0.0122,0.0097,,0.009,0.0302,0.0104,-0.0059,0.028,0.0024 +2009-09-30,-0.0138,0.0199,0.0053,0.034,,0.0035,-0.0174,-0.0122,0.0155,0.006,-0.0008 +2009-10-31,0.0027,0.0076,0.0137,-0.0066,,-0.0232,0.0147,-0.0011,0.0327,-0.0377,-0.0029 +2009-11-30,0.0063,-0.0187,0.0026,0.0401,,-0.0372,0.0309,0.0132,0.0351,0.0043,0.0207 +2009-12-31,-0.0085,0.017,-0.0218,0.0007,,0.0147,0.0202,-0.0099,0.0198,0.0091,-0.0002 +2010-01-31,0.0017,-0.0011,-0.0164,-0.041,,0.0006,-0.0353,-0.0313,-0.0729,-0.0456,-0.0152 +2010-02-28,-0.0068,0.0365,-0.0107,0.0063,,-0.0328,0.003,-0.0224,0.037,0.0238,0.0006 +2010-03-31,-0.0049,-0.0382,-0.0157,0.0356,,-0.0114,0.0201,0.0127,-0.0126,0.0218,-0.0017 +2010-04-30,0.0246,0.0161,0.0012,0.0172,,-0.0102,0.0144,0.0206,0.0193,0.0136,0.0053 +2010-05-31,0.0079,0.029,0.0032,-0.0192,,0.009,-0.0879,-0.0181,-0.0693,-0.0568,-0.0252 
+2010-06-30,-0.002,0.0742,0.0091,-0.007,,-0.0288,-0.0086,-0.0144,0.0031,-0.0061,-0.0056 +2010-07-31,0.0173,-0.0128,0.0075,0.0063,,-0.0174,0.0184,0,0.0676,-0.04,0.0141 +2010-08-31,0.0069,0.0212,0.0168,-0.0182,,0.031,0.0056,-0.0014,-0.0255,0.0114,0.0071 +2010-09-30,0.0179,0.036,0.0254,0.0743,,0.0273,0.0454,0.0124,0.0725,0.0324,0.0348 +2010-10-31,0.0243,0.0305,0.0295,0.0451,,0.0895,0.0445,-0.0082,0.0497,0.0539,0.0322 +2010-11-30,-0.0006,0.006,-0.0143,0.0153,,-0.0386,0.0009,0.0147,-0.0037,0.0148,0.0012 +2010-12-31,0.0397,0.0377,0.0494,0.0467,,0.0515,0.0594,-0.0144,0.1067,0.0957,0.0429 +2011-01-31,0.0215,0.0227,0.0054,0.0083,,0.0291,0.01,0.0162,0.0099,0.0289,-0.0021 +2011-02-28,0.0246,-0.0029,0.0057,0.0252,,-0.0222,0.004,-0.0055,0.0132,0.0394,0.0149 +2011-03-31,-0.0064,0.0269,0.0279,-0.0234,,-0.0017,0.0173,0.0135,0.0206,0.0269,0.0008 +2011-04-30,0.0098,0.0292,0.0011,0.0399,,0.0355,0.0133,0.017,0.0346,0.0299,0.0155 +2011-05-31,-0.0226,0.0423,-0.0189,-0.0411,,0.0223,-0.0228,-0.0436,-0.0506,-0.0498,-0.0116 +2011-06-30,-0.0199,-0.0136,-0.0143,-0.0207,,-0.0005,-0.0245,-0.025,-0.0505,-0.0306,-0.0235 +2011-07-31,-0.0025,0.0035,-0.0016,0.0106,,0.0003,0.0155,0.0074,0.0296,0.0237,0.0099 +2011-08-31,0.0192,0.0141,0.0021,-0.0174,,0.0228,-0.0314,-0.0086,0.01,0.0026,0.0027 +2011-09-30,-0.032,-0.0357,-0.0729,-0.0657,,-0.0395,-0.0558,-0.0379,-0.1474,-0.0759,-0.0274 +2011-10-31,-0.0102,0.0087,0.0174,0.0498,,0.0028,0.0211,0.0204,0.0662,0.025,0.0099 +2011-11-30,-0.0068,0.0051,-0.0329,-0.0156,,-0.006,-0.002,-0.0119,-0.0222,0.028,-0.0026 +2011-12-31,-0.0124,-0.031,-0.0242,-0.0113,,0.0055,-0.015,-0.0415,-0.0375,-0.0248,0.0008 +2012-01-31,0.0046,0.0058,0.0175,0.0373,,-0.0137,-0.0037,0.0035,0.0247,0.0025,0.0056 +2012-02-29,0.0001,0.0007,0.0218,0.022,,0.0087,0.0082,0.006,0.0269,0.016,0.0023 +2012-03-31,-0.0265,-0.0033,-0.0178,0.0018,,0.0003,-0.0214,-0.0088,-0.0415,0.0041,-0.0058 +2012-04-30,-0.0146,0.0103,-0.002,-0.0198,,-0.0117,0.0136,-0.0027,-0.0043,0.0146,-0.0057 +2012-05-31,-0.001,-0.0251,-0.0204,-0.0575,,-0.0243,-0.0244,-0.0074,-0.0914,-0.0295,-0.0238 +2012-06-30,0.008,0.0252,0.0346,-0.0052,,0.0013,0.0138,0.0279,0.0549,-0.0307,-0.0018 +2012-07-31,0.0306,0.0235,0.0533,0.0026,0.0178,0.0323,0.0196,0.023,0.0647,-0.0384,0.0151 +2012-08-31,0.0034,0.0026,0.0052,0.0117,0.0265,0.059,0.0358,0.0183,0.013,0.0055,0.014 +2012-09-30,-0.0155,-0.0005,-0.002,0.0372,0.0097,-0.0279,-0.0023,-0.01,0.0171,-0.0261,-0.0045 +2012-10-31,-0.0214,-0.04,-0.0128,0.0045,0.0337,-0.0006,-0.0158,-0.0167,-0.0387,-0.0219,-0.0154 +2012-11-30,-0.0079,-0.0075,-0.0089,-0.0131,-0.01,-0.0022,-0.0096,-0.0021,0.0005,-0.0019,-0.0096 +2012-12-31,-0.0087,-0.0124,-0.0011,-0.0008,0.0008,-0.0074,0.0043,-0.0069,-0.0261,-0.0121,0.0016 +2013-01-31,0.0005,0,0.0069,-0.0041,-0.0007,-0.0046,0.0123,0.0092,0.024,0.0249,0.0067 +2013-02-28,-0.0119,0.0208,-0.0039,0.0135,-0.0091,0.005,0.0166,0.0025,-0.0409,0.0029,-0.0136 +2013-03-31,-0.0069,-0.0043,0.0049,0.0311,-0.0106,-0.0121,0.0042,-0.002,0.0067,-0.0005,-0.0032 +2013-04-30,-0.0035,0.0052,-0.005,0.0057,-0.0094,0.0133,-0.0053,-0.0057,-0.0279,-0.0209,-0.011 +2013-05-31,-0.0026,-0.0552,-0.0138,-0.0008,-0.0033,-0.0309,-0.0029,-0.0019,-0.0224,-0.0141,-0.0061 +2013-06-30,-0.0196,0.0169,,-0.0056,0.0363,0.0147,0.016,0.0117,-0.0471,0.0364,-0.0097 +2013-07-31,-0.004,0.0254,,0.01,0.0098,-0.0208,0.0082,0.0075,0.0136,0.0129,0.0123 + Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/data/edhec.csv =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/data/edhec.csv (rev 0) +++ 
pkg/PerformanceAnalytics/sandbox/Shubhankit/data/edhec.csv 2013-08-27 10:30:38 UTC (rev 2901) @@ -0,0 +1,153 @@ +,"Convertible Arbitrage","CTA Global","Distressed Securities","Emerging Markets","Equity Market Neutral","Event Driven","Fixed Income Arbitrage","Global Macro","Long/Short Equity","Merger Arbitrage","Relative Value","Short Selling","Funds of Funds" +1997-01-31,0.0119,0.0393,0.0178,0.0791,0.0189,0.0213,0.0191,0.0573,0.0281,0.0150,0.0180,-0.0166,0.0317 +1997-02-28,0.0123,0.0298,0.0122,0.0525,0.0101,0.0084,0.0122,0.0175,-0.0006,0.0034,0.0118,0.0426,0.0106 +1997-03-31,0.0078,-0.0021,-0.0012,-0.0120,0.0016,-0.0023,0.0109,-0.0119,-0.0084,0.0060,0.0010,0.0778,-0.0077 +1997-04-30,0.0086,-0.0170,0.0030,0.0119,0.0119,-0.0005,0.0130,0.0172,0.0084,-0.0001,0.0122,-0.0129,0.0009 +1997-05-31,0.0156,-0.0015,0.0233,0.0315,0.0189,0.0346,0.0118,0.0108,0.0394,0.0197,0.0173,-0.0737,0.0275 +1997-06-30,0.0212,0.0085,0.0217,0.0581,0.0165,0.0258,0.0108,0.0218,0.0223,0.0231,0.0198,-0.0065,0.0225 +1997-07-31,0.0193,0.0591,0.0234,0.0560,0.0247,0.0307,0.0095,0.0738,0.0454,0.0200,0.0181,-0.0429,0.0435 +1997-08-31,0.0134,-0.0473,0.0147,-0.0066,0.0017,0.0071,0.0087,-0.0180,0.0107,0.0079,0.0103,-0.0072,0.0051 +1997-09-30,0.0122,0.0198,0.0350,0.0229,0.0202,0.0329,0.0119,0.0290,0.0429,0.0197,0.0183,-0.0155,0.0334 +1997-10-31,0.0100,-0.0098,-0.0064,-0.0572,0.0095,0.0061,-0.0032,-0.0142,0.0010,0.0094,0.0079,0.0572,-0.0099 +1997-11-30,0.0000,0.0133,0.0054,-0.0378,0.0041,0.0134,0.0053,0.0106,-0.0026,0.0223,0.0111,0.0217,-0.0034 +1997-12-31,0.0068,0.0286,0.0073,0.0160,0.0066,0.0154,0.0079,0.0264,0.0104,0.0158,0.0082,0.0161,0.0089 +1998-01-31,0.0145,0.0104,0.0095,-0.0429,0.0060,0.0055,-0.0026,-0.0050,0.0013,0.0055,0.0132,0.0014,-0.0036 +1998-02-28,0.0146,-0.0065,0.0227,0.0339,0.0135,0.0294,0.0098,0.0128,0.0342,0.0212,0.0130,0.0155,0.0256 +1998-03-31,0.0144,0.0122,0.0252,0.0318,0.0179,0.0263,0.0128,0.0570,0.0336,0.0164,0.0145,0.0637,0.0373 +1998-04-30,0.0126,-0.0296,0.0165,0.0041,0.0067,0.0104,0.0075,0.0034,0.0120,0.0139,0.0145,0.0657,0.0125 +1998-05-31,0.0056,0.0193,0.0006,-0.0825,0.0080,-0.0083,0.0040,0.0095,-0.0087,-0.0009,0.0053,0.1437,-0.0072 +1998-06-30,-0.0006,0.0051,-0.0047,-0.0422,0.0108,0.0002,-0.0080,0.0120,0.0167,0.0072,0.0026,-0.0053,0.0021 +1998-07-31,0.0060,-0.0010,-0.0069,0.0019,0.0012,-0.0037,0.0106,0.0058,-0.0006,0.0007,0.0011,0.0343,-0.0007 +1998-08-31,-0.0319,0.0691,-0.0836,-0.1922,-0.0107,-0.0886,-0.0143,-0.0263,-0.0552,-0.0544,-0.0341,0.2463,-0.0616 +1998-09-30,-0.0196,0.0454,-0.0215,-0.0395,0.0061,-0.0110,-0.0362,-0.0059,0.0206,0.0076,0.0005,-0.0376,-0.0037 +1998-10-31,-0.0214,0.0004,-0.0029,0.0140,0.0052,0.0091,-0.0801,-0.0223,0.0169,0.0159,-0.0140,-0.1077,-0.0002 +1998-11-30,0.0269,-0.0089,0.0164,0.0430,0.0158,0.0244,0.0052,0.0194,0.0291,0.0220,0.0198,-0.0756,0.0220 +1998-12-31,0.0113,0.0221,0.0108,-0.0098,0.0209,0.0219,0.0120,0.0233,0.0408,0.0224,0.0164,-0.0531,0.0222 +1999-01-31,0.0219,-0.0167,0.0181,-0.0120,0.0101,0.0201,0.0158,0.0086,0.0258,0.0112,0.0195,-0.0665,0.0202 +1999-02-28,0.0082,0.0197,-0.0021,0.0102,0.0023,-0.0042,0.0208,-0.0111,-0.0169,0.0036,0.0085,0.0833,-0.0063 +1999-03-31,0.0136,-0.0065,0.0159,0.0585,0.0033,0.0193,0.0160,0.0024,0.0229,0.0133,0.0116,-0.0154,0.0213 +1999-04-30,0.0243,0.0210,0.0418,0.0630,0.0107,0.0429,0.0106,0.0329,0.0312,0.0218,0.0238,-0.0375,0.0400 +1999-05-31,0.0166,-0.0150,0.0207,0.0061,0.0089,0.0215,0.0072,-0.0055,0.0095,0.0210,0.0146,0.0009,0.0119 +1999-06-30,0.0102,0.0234,0.0273,0.0654,0.0168,0.0297,0.0088,0.0214,0.0315,0.0222,0.0148,-0.0412,0.0282 
+1999-07-31,0.0101,-0.0051,0.0084,-0.0061,0.0135,0.0096,0.0051,-0.0018,0.0177,0.0147,0.0110,0.0092,0.0088 +1999-08-31,0.0048,-0.0027,0.0020,-0.0147,0.0095,-0.0027,-0.0028,-0.0061,0.0022,0.0050,0.0062,0.0468,0.0028 +1999-09-30,0.0096,0.0064,-0.0041,-0.0069,0.0095,0.0090,0.0092,-0.0002,0.0113,0.0116,0.0105,0.0401,0.0052 +1999-10-31,0.0045,-0.0354,0.0027,0.0288,0.0066,0.0054,0.0087,0.0073,0.0212,0.0096,0.0070,-0.0130,0.0130 +1999-11-30,0.0124,0.0166,0.0220,0.0692,0.0133,0.0284,0.0106,0.0405,0.0481,0.0237,0.0137,-0.1239,0.0483 +1999-12-31,0.0140,0.0142,0.0300,0.1230,0.0198,0.0286,0.0097,0.0612,0.0745,0.0090,0.0183,-0.1137,0.0622 +2000-01-31,0.0227,0.0128,0.0088,0.0077,0.0075,0.0088,0.0041,0.0021,0.0075,0.0143,0.0173,0.0427,0.0169 +2000-02-29,0.0267,-0.0022,0.0421,0.0528,0.0253,0.0346,0.0097,0.0408,0.0699,0.0239,0.0185,-0.1340,0.0666 +2000-03-31,0.0243,-0.0138,0.0103,0.0318,0.0134,0.0069,-0.0061,-0.0104,0.0006,0.0131,0.0163,-0.0230,0.0039 +2000-04-30,0.0223,-0.0241,-0.0101,-0.0541,0.0168,-0.0059,-0.0006,-0.0304,-0.0201,0.0188,0.0092,0.1028,-0.0269 +2000-05-31,0.0149,0.0114,-0.0132,-0.0433,0.0062,-0.0034,0.0107,-0.0070,-0.0097,0.0146,0.0080,0.0704,-0.0122 +2000-06-30,0.0179,-0.0124,0.0203,0.0334,0.0171,0.0268,0.0058,0.0154,0.0349,0.0167,0.0176,-0.1107,0.0311 +2000-07-31,0.0093,-0.0131,0.0064,0.0025,0.0063,0.0057,0.0018,0.0037,0.0006,0.0116,0.0084,0.0553,-0.0022 +2000-08-31,0.0162,0.0189,0.0140,0.0368,0.0210,0.0173,0.0107,0.0248,0.0345,0.0157,0.0157,-0.1135,0.0267 +2000-09-30,0.0141,-0.0208,-0.0019,-0.0462,0.0058,0.0048,0.0076,-0.0149,-0.0016,0.0137,0.0075,0.1204,-0.0069 +2000-10-31,0.0052,0.0075,-0.0073,-0.0256,0.0040,-0.0068,0.0006,-0.0024,-0.0084,0.0026,-0.0004,0.0784,-0.0104 +2000-11-30,-0.0081,0.0425,-0.0209,-0.0385,0.0045,-0.0136,0.0066,0.0125,-0.0153,0.0102,0.0006,0.1657,-0.0205 +2000-12-31,-0.0002,0.0682,0.0001,0.0116,0.0160,0.0127,0.0048,0.0472,0.0248,0.0125,0.0075,0.0063,0.0133 +2001-01-31,0.0344,0.0025,0.0308,0.0586,0.0075,0.0298,0.0163,0.0214,0.0165,0.0111,0.0333,-0.0271,0.0223 +2001-02-28,0.0182,-0.0016,0.0100,-0.0221,0.0120,0.0045,0.0054,-0.0072,-0.0264,0.0054,0.0030,0.1021,-0.0089 +2001-03-31,0.0162,0.0438,-0.0037,-0.0175,0.0108,-0.0042,0.0051,0.0038,-0.0199,-0.0061,-0.0011,0.0620,-0.0068 +2001-04-30,0.0157,-0.0362,0.0048,0.0114,0.0075,0.0110,0.0094,0.0049,0.0246,0.0058,0.0174,-0.0991,0.0104 +2001-05-31,0.0033,0.0081,0.0235,0.0278,0.0077,0.0185,0.0068,0.0032,0.0043,0.0161,0.0141,-0.0130,0.0080 +2001-06-30,0.0012,-0.0077,0.0360,0.0160,0.0017,0.0063,0.0017,0.0017,0.0019,-0.0087,0.0019,0.0110,0.0013 +2001-07-31,0.0091,-0.0040,0.0073,-0.0286,0.0031,0.0049,0.0054,-0.0040,-0.0144,0.0079,0.0010,0.0353,-0.0040 +2001-08-31,0.0142,0.0153,0.0106,0.0030,0.0094,0.0090,0.0105,0.0006,-0.0096,0.0099,-0.0031,0.0752,0.0019 +2001-09-30,0.0078,0.0246,-0.0014,-0.0425,0.0023,-0.0254,-0.0013,-0.0070,-0.0348,-0.0267,-0.0221,0.0941,-0.0142 +2001-10-31,0.0117,0.0336,0.0103,0.0278,0.0058,0.0148,0.0134,0.0208,0.0099,0.0085,0.0164,-0.0298,0.0095 +2001-11-30,0.0080,-0.0543,0.0086,0.0483,0.0055,0.0105,-0.0024,0.0021,0.0200,0.0014,0.0136,-0.0655,0.0058 +2001-12-31,-0.0094,0.0148,0.0015,0.0421,0.0056,0.0107,0.0053,0.0138,0.0180,0.0045,0.0097,-0.0251,0.0099 +2002-01-31,0.0148,-0.0072,0.0186,0.0273,0.0065,0.0078,0.0086,0.0069,-0.0037,0.0077,0.0097,0.0343,0.0030 +2002-02-28,-0.0049,-0.0202,-0.0033,0.0181,-0.0007,-0.0071,0.0056,-0.0035,-0.0123,-0.0044,-0.0011,0.0390,-0.0015 +2002-03-31,0.0053,0.0009,0.0052,0.0331,0.0047,0.0153,0.0045,0.0064,0.0155,0.0073,0.0145,-0.0446,0.0090 
+2002-04-30,0.0096,-0.0104,0.0139,0.0144,0.0076,0.0046,0.0113,0.0098,-0.0042,-0.0013,0.0070,0.0483,0.0052 +2002-05-31,0.0033,0.0270,0.0091,0.0001,0.0053,0.0001,0.0099,0.0123,-0.0034,0.0000,0.0031,0.0346,0.0050 +2002-06-30,0.0004,0.0655,-0.0117,-0.0292,0.0022,-0.0283,0.0069,-0.0022,-0.0249,-0.0170,-0.0107,0.0548,-0.0095 +2002-07-31,-0.0159,0.0413,-0.0133,-0.0309,-0.0013,-0.0300,0.0057,-0.0078,-0.0389,-0.0174,-0.0185,0.0644,-0.0140 +2002-08-31,0.0050,0.0220,0.0009,0.0119,0.0069,0.0060,0.0097,0.0063,0.0041,0.0061,0.0058,0.0015,0.0037 +2002-09-30,0.0146,0.0284,-0.0044,-0.0252,0.0015,-0.0070,-0.0033,0.0054,-0.0160,-0.0028,-0.0110,0.0731,-0.0033 +2002-10-31,0.0104,-0.0376,-0.0031,0.0154,0.0016,0.0031,-0.0063,-0.0086,0.0123,0.0032,0.0084,-0.0405,-0.0031 +2002-11-30,0.0251,-0.0164,0.0239,0.0190,0.0025,0.0216,0.0054,0.0047,0.0224,0.0054,0.0185,-0.0547,0.0106 +2002-12-31,0.0157,0.0489,0.0222,0.0048,0.0094,0.0044,0.0153,0.0192,-0.0149,0.0046,0.0023,0.0443,0.0077 +2003-01-31,0.0283,0.0441,0.0243,0.0012,0.0083,0.0154,0.0106,0.0182,0.0005,0.0040,0.0067,0.0162,0.0072 +2003-02-28,0.0133,0.0402,0.0092,0.0084,0.0024,0.0026,0.0079,0.0166,-0.0037,0.0018,-0.0004,0.0130,0.0031 +2003-03-31,0.0089,-0.0445,0.0113,0.0019,0.0015,0.0083,0.0019,-0.0122,0.0020,-0.0007,0.0049,-0.0075,-0.0004 +2003-04-30,0.0150,0.0065,0.0345,0.0450,0.0031,0.0272,0.0091,0.0117,0.0298,0.0099,0.0186,-0.0656,0.0134 +2003-05-31,0.0136,0.0490,0.0270,0.0433,0.0107,0.0301,0.0207,0.0397,0.0362,0.0154,0.0212,-0.0499,0.0205 +2003-06-30,-0.0058,-0.0192,0.0267,0.0268,0.0034,0.0181,0.0044,0.0056,0.0128,0.0048,0.0071,-0.0162,0.0068 +2003-07-31,-0.0072,-0.0171,0.0117,0.0104,-0.0006,0.0119,-0.0092,-0.0035,0.0118,0.0053,0.0041,-0.0361,0.0025 +2003-08-31,-0.0087,0.0078,0.0137,0.0374,0.0031,0.0133,0.0043,0.0202,0.0179,0.0070,0.0058,-0.0354,0.0078 +2003-09-30,0.0171,-0.0019,0.0242,0.0264,0.0078,0.0133,0.0105,0.0215,0.0094,0.0077,0.0086,0.0136,0.0121 +2003-10-31,0.0146,0.0104,0.0267,0.0259,0.0115,0.0191,0.0035,0.0111,0.0299,0.0111,0.0159,-0.0656,0.0152 +2003-11-30,0.0092,0.0018,0.0154,0.0096,0.0046,0.0116,0.0069,0.0031,0.0130,0.0044,0.0102,-0.0136,0.0070 +2003-12-31,0.0054,0.0381,0.0198,0.0403,0.0054,0.0172,0.0101,0.0293,0.0191,0.0098,0.0127,-0.0178,0.0139 +2004-01-31,0.0119,0.0199,0.0301,0.0251,0.0109,0.0234,0.0092,0.0117,0.0192,0.0097,0.0146,-0.0090,0.0156 +2004-02-29,0.0017,0.0529,0.0075,0.0253,0.0063,0.0113,0.0084,0.0150,0.0123,0.0051,0.0057,0.0018,0.0111 +2004-03-31,0.0061,-0.0051,0.0046,0.0172,0.0032,0.0016,0.0003,0.0064,0.0041,0.0017,0.0038,-0.0148,0.0043 +2004-04-30,0.0020,-0.0532,0.0093,-0.0252,-0.0082,0.0002,0.0062,-0.0178,-0.0165,-0.0039,-0.0045,0.0384,-0.0068 +2004-05-31,-0.0128,-0.0118,-0.0010,-0.0181,0.0024,-0.0023,0.0040,-0.0081,-0.0035,0.0000,-0.0037,-0.0024,-0.0082 +2004-06-30,-0.0106,-0.0316,0.0202,0.0020,0.0042,0.0113,0.0055,-0.0019,0.0091,0.0017,0.0022,-0.0051,0.0034 +2004-07-31,0.0013,-0.0119,0.0019,-0.0027,0.0006,-0.0082,0.0062,-0.0014,-0.0154,-0.0092,0.0007,0.0638,-0.0049 +2004-08-31,0.0040,-0.0084,0.0088,0.0133,-0.0009,0.0035,0.0036,-0.0039,-0.0022,0.0011,0.0031,0.0126,-0.0010 +2004-09-30,-0.0017,0.0220,0.0104,0.0280,0.0085,0.0103,0.0012,0.0008,0.0210,0.0042,0.0052,-0.0216,0.0099 +2004-10-31,-0.0044,0.0358,0.0143,0.0185,-0.0005,0.0124,0.0028,0.0138,0.0074,0.0074,0.0040,-0.0092,0.0068 +2004-11-30,0.0081,0.0475,0.0337,0.0328,0.0140,0.0306,0.0075,0.0280,0.0308,0.0164,0.0149,-0.0574,0.0244 +2004-12-31,0.0056,0.0000,0.0266,0.0201,0.0058,0.0244,0.0060,0.0033,0.0178,0.0133,0.0099,-0.0391,0.0145 
+2005-01-31,-0.0096,-0.0438,0.0037,0.0143,0.0081,0.0004,0.0044,-0.0047,-0.0017,0.0000,0.0012,0.0387,0.0006 +2005-02-28,-0.0058,0.0005,0.0134,0.0346,0.0080,0.0144,0.0085,0.0171,0.0210,0.0065,0.0081,0.0118,0.0136 +2005-03-31,-0.0140,-0.0006,0.0032,-0.0197,0.0019,-0.0004,0.0024,-0.0027,-0.0096,0.0032,-0.0042,0.0244,-0.0044 +2005-04-30,-0.0316,-0.0354,-0.0052,-0.0049,-0.0030,-0.0128,-0.0003,-0.0080,-0.0184,-0.0105,-0.0108,0.0393,-0.0141 +2005-05-31,-0.0133,0.0232,0.0006,0.0072,0.0047,0.0065,-0.0010,0.0088,0.0115,0.0095,-0.0002,-0.0475,0.0018 +2005-06-30,0.0107,0.0260,0.0133,0.0160,0.0081,0.0133,0.0010,0.0116,0.0195,0.0085,0.0095,-0.0032,0.0131 +2005-07-31,0.0164,-0.0013,0.0173,0.0257,0.0078,0.0215,0.0081,0.0119,0.0265,0.0115,0.0149,-0.0242,0.0134 +2005-08-31,0.0066,0.0100,0.0124,0.0152,0.0062,0.0092,0.0036,0.0083,0.0097,0.0061,0.0053,0.0259,0.0079 +2005-09-30,0.0142,0.0079,0.0112,0.0402,0.0087,0.0100,0.0062,0.0269,0.0222,0.0035,0.0122,0.0198,0.0147 +2005-10-31,-0.0015,-0.0092,-0.0032,-0.0230,0.0001,-0.0173,0.0057,-0.0074,-0.0174,-0.0145,-0.0038,0.0233,-0.0149 +2005-11-30,0.0004,0.0379,0.0100,0.0279,0.0061,0.0125,0.0015,0.0164,0.0211,0.0112,0.0067,-0.0300,0.0160 +2005-12-31,0.0092,-0.0153,0.0122,0.0284,0.0068,0.0142,0.0054,0.0135,0.0249,0.0138,0.0126,-0.0035,0.0191 +2006-01-31,0.0250,0.0174,0.0253,0.0526,0.0115,0.0341,0.0093,0.0258,0.0381,0.0272,0.0238,-0.0288,0.0286 +2006-02-28,0.0116,-0.0186,0.0065,0.0161,0.0046,0.0051,0.0041,0.0002,0.0016,0.0104,0.0073,0.0064,0.0037 +2006-03-31,0.0107,0.0284,0.0172,0.0122,0.0098,0.0185,0.0055,0.0094,0.0238,0.0144,0.0157,-0.0139,0.0164 +2006-04-30,0.0064,0.0387,0.0193,0.0365,0.0102,0.0164,0.0121,0.0238,0.0172,0.0119,0.0126,-0.0012,0.0171 +2006-05-31,0.0091,-0.0146,0.0086,-0.0389,0.0002,0.0008,0.0059,-0.0155,-0.0248,0.0009,-0.0025,0.0246,-0.0133 +2006-06-30,0.0012,-0.0142,-0.0015,-0.0097,0.0063,0.0012,0.0036,-0.0015,-0.0062,0.0087,0.0021,0.0118,-0.0028 +2006-07-31,0.0066,-0.0216,0.0009,0.0067,0.0051,-0.0011,0.0064,0.0006,-0.0031,0.0058,0.0017,0.0173,-0.0005 +2006-08-31,0.0098,0.0020,0.0099,0.0133,-0.0009,0.0112,0.0037,-0.0039,0.0114,0.0053,0.0092,-0.0156,0.0066 +2006-09-30,0.0093,-0.0055,0.0033,0.0011,0.0009,0.0035,0.0014,-0.0067,0.0005,0.0041,0.0040,-0.0236,-0.0003 +2006-10-31,0.0054,0.0102,0.0194,0.0257,0.0065,0.0206,0.0067,0.0097,0.0194,0.0132,0.0132,-0.0380,0.0163 +2006-11-30,0.0092,0.0226,0.0179,0.0323,0.0075,0.0182,0.0060,0.0199,0.0200,0.0142,0.0129,-0.0268,0.0185 +2006-12-31,0.0127,0.0146,0.0165,0.0291,0.0107,0.0168,0.0072,0.0116,0.0153,0.0133,0.0128,0.0039,0.0175 +2007-01-31,0.0130,0.0113,0.0150,0.0079,0.0083,0.0201,0.0069,0.0061,0.0121,0.0191,0.0135,-0.0107,0.0121 +2007-02-28,0.0117,-0.0144,0.0145,0.0100,0.0051,0.0207,0.0106,0.0018,0.0082,0.0255,0.0114,0.0028,0.0096 +2007-03-31,0.0060,-0.0141,0.0108,0.0185,0.0101,0.0146,0.0060,0.0027,0.0115,0.0063,0.0081,-0.0051,0.0096 +2007-04-30,0.0026,0.0241,0.0164,0.0255,0.0089,0.0197,0.0071,0.0152,0.0198,0.0160,0.0134,-0.0265,0.0163 +2007-05-31,0.0110,0.0230,0.0180,0.0270,0.0121,0.0213,0.0055,0.0192,0.0224,0.0171,0.0156,-0.0199,0.0204 +2007-06-30,0.0011,0.0229,0.0027,0.0236,0.0077,-0.0007,0.0048,0.0107,0.0077,-0.0053,0.0100,0.0236,0.0082 +2007-07-31,-0.0053,-0.0122,-0.0056,0.0275,0.0051,-0.0032,0.0007,0.0116,0.0009,-0.0054,0.0004,0.0486,0.0041 +2007-08-31,-0.0145,-0.0280,-0.0118,-0.0274,-0.0094,-0.0144,-0.0048,-0.0116,-0.0160,0.0001,-0.0077,0.0092,-0.0222 +2007-09-30,0.0161,0.0469,0.0095,0.0428,0.0123,0.0134,0.0164,0.0330,0.0256,0.0131,0.0153,-0.0207,0.0199 
+2007-10-31,0.0177,0.0280,0.0175,0.0485,0.0168,0.0214,0.0114,0.0304,0.0281,0.0191,0.0200,-0.0026,0.0303 +2007-11-30,-0.0131,-0.0016,-0.0169,-0.0237,-0.0018,-0.0202,-0.0094,-0.0063,-0.0225,-0.0149,-0.0112,0.0719,-0.0148 +2007-12-31,-0.0077,0.0117,0.0002,0.0130,0.0054,0.0007,0.0036,0.0104,0.0043,-0.0025,0.0022,0.0056,0.0040 +2008-01-31,-0.0009,0.0255,-0.0233,-0.0503,-0.0112,-0.0271,-0.0012,-0.0010,-0.0400,-0.0126,-0.0118,0.0556,-0.0272 +2008-02-29,-0.0083,0.0620,0.0014,0.0280,0.0120,0.0084,-0.0049,0.0312,0.0140,0.0060,0.0064,0.0300,0.0142 +2008-03-31,-0.0317,-0.0056,-0.0126,-0.0379,-0.0049,-0.0168,-0.0306,-0.0169,-0.0236,-0.0045,-0.0162,0.0192,-0.0262 +2008-04-30,0.0076,-0.0078,0.0088,0.0190,0.0059,0.0118,0.0187,0.0078,0.0223,0.0149,0.0130,-0.0461,0.0097 +2008-05-31,0.0107,0.0162,0.0137,0.0163,0.0126,0.0176,0.0103,0.0114,0.0227,0.0136,0.0159,-0.0142,0.0172 +2008-06-30,-0.0081,0.0330,-0.0031,-0.0274,0.0156,-0.0113,-0.0027,0.0030,-0.0164,-0.0109,-0.0084,0.0751,-0.0068 +2008-07-31,-0.0188,-0.0333,-0.0182,-0.0330,-0.0100,-0.0166,-0.0023,-0.0213,-0.0261,0.0011,-0.0125,0.0072,-0.0264 +2008-08-31,-0.0066,-0.0114,-0.0072,-0.0336,-0.0135,-0.0025,-0.0003,-0.0133,-0.0146,0.0051,-0.0023,-0.0215,-0.0156 +2008-09-30,-0.1027,0.0010,-0.0518,-0.0982,-0.0285,-0.0627,-0.0506,-0.0313,-0.0675,-0.0276,-0.0538,0.0378,-0.0618 +2008-10-31,-0.1237,0.0345,-0.0775,-0.1331,-0.0044,-0.0625,-0.0867,-0.0157,-0.0629,-0.0245,-0.0692,0.1170,-0.0600 +2008-11-30,-0.0276,0.0214,-0.0435,-0.0391,-0.0587,-0.0301,-0.0308,0.0033,-0.0188,0.0006,-0.0209,0.0428,-0.0192 +2008-12-31,0.0177,0.0140,-0.0197,-0.0010,0.0005,-0.0071,-0.0035,0.0118,0.0081,0.0162,0.0031,-0.0146,-0.0119 +2009-01-31,0.0491,-0.0016,0.0082,-0.0112,0.0079,0.0132,0.0112,0.0029,-0.0017,0.0056,0.0100,0.0282,0.0060 +2009-02-28,0.0164,-0.0031,-0.0122,-0.0133,-0.0046,-0.0091,0.0065,-0.0055,-0.0161,0.0006,-0.0016,0.0328,-0.0037 +2009-03-31,0.0235,-0.0180,0.0022,0.0350,0.0021,0.0117,0.0057,0.0048,0.0188,0.0125,0.0100,-0.0462,0.0008 +2009-04-30,0.0500,-0.0140,0.0387,0.0663,-0.0012,0.0337,0.0221,0.0127,0.0375,0.0081,0.0342,-0.0820,0.0092 +2009-05-31,0.0578,0.0213,0.0504,0.0884,0.0146,0.0442,0.0365,0.0348,0.0516,0.0107,0.0392,0.0008,0.0312 +2009-06-30,0.0241,-0.0147,0.0198,0.0013,0.0036,0.0123,0.0126,-0.0076,0.0009,0.0104,0.0101,-0.0094,0.0024 +2009-07-31,0.0611,-0.0012,0.0311,0.0451,0.0042,0.0291,0.0322,0.0166,0.0277,0.0068,0.0260,-0.0596,0.0153 +2009-08-31,0.0315,0.0054,0.0244,0.0166,0.0070,0.0207,0.0202,0.0050,0.0157,0.0102,0.0162,-0.0165,0.0113 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/data/edhec.rda =================================================================== (Binary files differ) Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/data/edhec.rda ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/data/managers.csv =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/data/managers.csv (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/data/managers.csv 2013-08-27 10:30:38 UTC (rev 2901) @@ -0,0 +1,133 @@ +,HAM1,HAM2,HAM3,HAM4,HAM5,HAM6,EDHEC LS EQ,SP500 TR,US 10Y TR,US 3m TR +1996-01-31,0.0074,,0.0349,0.0222,,,,0.034,0.0038,0.00456 +1996-02-29,0.0193,,0.0351,0.0195,,,,0.0093,-0.03532,0.00398 +1996-03-31,0.0155,,0.0258,-0.0098,,,,0.0096,-0.01057,0.00371 +1996-04-30,-0.0091,,0.0449,0.0236,,,,0.0147,-0.01739,0.00428 
+1996-05-31,0.0076,,0.0353,0.0028,,,,0.0258,-0.00543,0.00443 +1996-06-30,-0.0039,,-0.0303,-0.0019,,,,0.0038,0.01507,0.00412 +1996-07-31,-0.0231,,-0.0337,-0.0446,,,,-0.0442,-0.001,0.00454 +1996-08-31,0.0395,-0.0001,0.0461,0.0351,,,,0.0211,-0.00448,0.00451 +1996-09-30,0.0147,0.1002,0.0653,0.0757,,,,0.0563,0.02229,0.0047 +1996-10-31,0.0288,0.0338,0.0395,-0.018,,,,0.0276,0.02869,0.00428 +1996-11-30,0.0156,0.0737,0.0666,0.0458,,,,0.0756,0.02797,0.00427 +1996-12-31,0.0176,0.0298,0.0214,0.0439,,,,-0.0198,-0.02094,0.00442 +1997-01-31,0.0212,0.0794,0.0771,0.0437,,,0.0281,0.0625,-0.00055,0.00457 +1997-02-28,0.0022,-0.0082,-0.0374,0.0312,,,-0.0006,0.0078,-0.00167,0.0039 +1997-03-31,0.0094,-0.0269,-0.0336,0.0113,,,-0.0084,-0.0411,-0.01958,0.00422 +1997-04-30,0.0126,-0.0061,0.0286,0.0354,,,0.0084,0.0597,0.01954,0.00477 +1997-05-31,0.0438,0.0539,0.0759,0.0789,,,0.0394,0.0609,0.01033,0.00513 +1997-06-30,0.0231,0.0552,0.0054,0.0412,,,0.0223,0.0448,0.01665,0.00365 +1997-07-31,0.0154,0.115,0.1081,0.0794,,,0.0454,0.0796,0.04161,0.0045 +1997-08-31,0.0237,-0.0197,-0.0028,0.0143,,,0.0107,-0.056,-0.02148,0.00428 +1997-09-30,0.0219,0.0576,0.0549,0.0217,,,0.0429,0.0548,0.02153,0.00458 +1997-10-31,-0.0207,-0.0222,-0.0354,0.0056,,,0.001,-0.0334,0.0262,0.00427 +1997-11-30,0.025,-0.0051,0.0176,-0.0067,,,-0.0026,0.0463,0.00241,0.0039 +1997-12-31,0.011,0.0192,-0.0003,0.0116,,,0.0104,0.0172,0.01311,0.00429 +1998-01-31,0.0056,-0.0112,0.0491,-0.0041,,,0.0013,0.0111,0.02067,0.00468 +1998-02-28,0.0429,0.1007,0.0466,0.0324,,,0.0342,0.0721,-0.00846,0.00355 +1998-03-31,0.0362,0.0625,0.0208,0.0404,,,0.0336,0.0512,0.00159,0.00473 +1998-04-30,0.0078,-0.001,0.0234,0.0242,,,0.012,0.0101,0.00333,0.00449 +1998-05-31,-0.0231,-0.0107,-0.0136,-0.0047,,,-0.0087,-0.0172,0.01102,0.00416 +1998-06-30,0.0121,0.0392,0.0395,-0.0133,,,0.0167,0.0406,0.01324,0.00419 +1998-07-31,-0.0215,-0.0272,0.0005,-0.0723,,,-0.0006,-0.0106,0.00051,0.00443 +1998-08-31,-0.0944,0,-0.0718,-0.1759,,,-0.0552,-0.1446,0.03923,0.00456 +1998-09-30,0.0248,-0.0046,0.0665,0.0549,,,0.0206,0.0641,0.05055,0.00511 +1998-10-31,0.0558,0.0349,-0.0051,-0.0503,,,0.0169,0.0813,-0.00991,0.00393 +1998-11-30,0.0126,0.0699,0.0555,0.0887,,,0.0291,0.0606,-0.00937,0.00333 +1998-12-31,0.0097,0.0913,0.0464,-0.0108,,,0.0408,0.0576,0.01028,0.00395 +1999-01-31,-0.0093,0.0787,0.0269,0.041,,,0.0258,0.0418,0.00417,0.00355 +1999-02-28,0.0094,-0.023,-0.053,-0.0549,,,-0.0169,-0.0311,-0.04474,0.00287 +1999-03-31,0.0462,0.1082,0.0187,-0.0615,,,0.0229,0.04,0.00884,0.00411 +1999-04-30,0.051,0.0166,0.0417,0.0684,,,0.0312,0.0387,-0.00427,0.00385 +1999-05-31,0.0162,-0.0002,0.0079,0.0635,,,0.0095,-0.0236,-0.01909,0.00389 +1999-06-30,0.0326,0.065,0.0547,0.0093,,,0.0315,0.0555,-0.00933,0.00418 +1999-07-31,0.0098,0.0279,0.0042,0.0249,,,0.0177,-0.0312,-0.00227,0.00408 +1999-08-31,-0.0165,0.0285,0.0122,-0.0407,,,0.0022,-0.0049,-0.00516,0.00402 +1999-09-30,-0.0045,0.0332,0.0068,-0.0231,,,0.0113,-0.0274,0.01128,0.0045 +1999-10-31,-0.0006,0.0422,0.0747,0.0253,,,0.0212,0.0633,-0.00487,0.00394 [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2901 From noreply at r-forge.r-project.org Tue Aug 27 18:05:30 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 27 Aug 2013 18:05:30 +0200 (CEST) Subject: [Returnanalytics-commits] r2902 - in pkg/PortfolioAnalytics: R man sandbox Message-ID: <20130827160530.EA00B1846AF@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-27 18:05:30 +0200 (Tue, 27 Aug 2013) New Revision: 2902 Modified: 
pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R Log: Adding tangency lines to efficient frontier plots. Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-27 10:30:38 UTC (rev 2901) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-27 16:05:30 UTC (rev 2902) @@ -28,6 +28,10 @@ #' GenSA does not return any usable trace information for portfolios tested at #' each iteration, therefore we cannot extract and chart an efficient frontier. #' +#' By default, the tangency portfolio (maximum Sharpe Ratio or modified Sharpe Ratio) +#' will be plotted using a risk free rate of 0. Set \code{rf=NULL} to omit +#' this from the plot. +#' #' @param object optimal portfolio created by \code{\link{optimize.portfolio}} #' @param match.col string name of column to use for risk (horizontal axis). #' \code{match.col} must match the name of an objective measure in the \code{objective_measures} or \code{opt_values} slot of the object created by \code{\link{optimize.portfolio}} #' @param n.portfolios number of portfolios to use to plot the efficient frontier #' @param xlim set the x-axis limit, same as in \code{\link{plot}} #' @param ylim set the y-axis limit, same as in \code{\link{plot}} -#' @param cex.axis -#' @param element.color +#' @param cex.axis A numerical value giving the amount by which the axis should be magnified relative to the default. +#' @param element.color provides the color for drawing less-important chart elements, such as the box lines, axis lines, etc. #' @param main a main title for the plot #' @param ... passthrough parameters to \code{\link{plot}} +#' @param rf risk free rate. If \code{rf} is not null, the maximum Sharpe Ratio or modified Sharpe Ratio tangency portfolio will be plotted +#' @param cex.legend A numerical value giving the amount by which the legend should be magnified relative to the default.
+#' @param RAR.text Risk Adjusted Return ratio text to plot in the legend #' @author Ross Bennett #' @export chart.EfficientFrontier <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ @@ -48,7 +55,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.optimize.portfolio.ROI <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ +chart.EfficientFrontier.optimize.portfolio.ROI <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., rf=0, cex.legend=0.8){ if(!inherits(object, "optimize.portfolio.ROI")) stop("object must be of class optimize.portfolio.ROI") portf <- object$portfolio @@ -86,14 +93,23 @@ if(match.col %in% c("ETL", "ES", "CVaR")){ frontier <- meanetl.efficient.frontier(portfolio=portf, R=R, n.portfolios=n.portfolios) + rar <- "STARR" } if(match.col == "StdDev"){ frontier <- meanvar.efficient.frontier(portfolio=portf, R=R, n.portfolios=n.portfolios) + rar <- "Sharpe Ratio" } # data points to plot the frontier x.f <- frontier[, match.col] y.f <- frontier[, "mean"] + # Points for the Sharpe Ratio ((mu - rf) / StdDev) or STARR ((mu - rf) / ETL) + if(!is.null(rf)){ + sr <- (y.f - rf) / (x.f) + idx.maxsr <- which.max(sr) + srmax <- sr[idx.maxsr] + } + # set the x and y limits if(is.null(xlim)){ xlim <- range(c(x.f, asset_risk)) @@ -110,6 +126,15 @@ # plot the optimal portfolio points(opt_risk, opt_ret, col="blue", pch=16) # optimal text(x=opt_risk, y=opt_ret, labels="Optimal",col="blue", pos=4, cex=0.8) + if(!is.null(rf)){ + # Plot tangency line and points at risk-free rate and tangency portfolio + abline(rf, srmax, lty=2) + points(0, rf, pch=16) + points(x.f[idx.maxsr], y.f[idx.maxsr], pch=16) + # Add legend with max Sharpe Ratio and risk-free rate + legend("topleft", paste("Max ", rar, " = ", signif(srmax,3), sep = ""), bty = "n", cex=cex.legend) + legend("topleft", inset = c(0,0.05), paste("rf = ", signif(rf,3), sep = ""), bty = "n", cex=cex.legend) + } axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) @@ -117,7 +142,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ +chart.EfficientFrontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="Modified Sharpe", rf=0, cex.legend=0.8){ # This function will work with objects of class optimize.portfolio.DEoptim, # optimize.portfolio.random, and optimize.portfolio.pso @@ -163,6 +188,13 @@ x.f <- frontier[, match.col] y.f <- frontier[, "mean"] + # Points for the Sharpe or Modified Sharpe Ratio + if(!is.null(rf)){ + sr <- (y.f - rf) / (x.f) + idx.maxsr <- which.max(sr) + srmax <- sr[idx.maxsr] + } + # set the x and y limits if(is.null(xlim)){ xlim <- range(c(x.f, asset_risk)) @@ -179,6 +211,15 @@ # plot the optimal portfolio points(opt_risk, opt_ret, col="blue", pch=16) # optimal text(x=opt_risk, y=opt_ret, labels="Optimal",col="blue", pos=4, cex=0.8) + if(!is.null(rf)){ + # Plot tangency line and points at risk-free rate and tangency portfolio + abline(rf,
srmax, lty=2) + points(0, rf, pch=16) + points(x.f[idx.maxsr], y.f[idx.maxsr], pch=16) + # Add legend with max Sharpe Ratio and risk-free rate + legend("topleft", paste("Max ", RAR.text, " = ", signif(srmax,3), sep = ""), bty = "n", cex=cex.legend) + legend("topleft", inset = c(0,0.05), paste("rf = ", signif(rf,3), sep = ""), bty = "n", cex=cex.legend) + } axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) @@ -315,7 +356,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.efficient.frontier <- function(object, chart.assets=TRUE, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ +chart.EfficientFrontier.efficient.frontier <- function(object, chart.assets=TRUE, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="Modified Sharpe", rf=0, cex.legend=0.8){ if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") # get the returns and efficient frontier object @@ -354,6 +395,12 @@ } } + if(!is.null(rf)){ + sr <- (frontier[, mean.mtc] - rf) / (frontier[, mtc]) + idx.maxsr <- which.max(sr) + srmax <- sr[idx.maxsr] + } + # plot the efficient frontier line plot(x=frontier[, mtc], y=frontier[, mean.mtc], ylab="mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, pch=5, axes=FALSE, ...) if(chart.assets){ @@ -361,6 +408,15 @@ points(x=asset_risk, y=asset_ret) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) } + if(!is.null(rf)){ + # Plot tangency line and points at risk-free rate and tangency portfolio + abline(rf, srmax, lty=2) + points(0, rf, pch=16) + points(frontier[idx.maxsr, mtc], frontier[idx.maxsr, mean.mtc], pch=16) + # Add legend with max Risk adjusted Return ratio and risk-free rate + legend("topleft", paste("Max ", RAR.text, " = ", signif(srmax,3), sep = ""), bty = "n", cex=cex.legend) + legend("topleft", inset = c(0,0.05), paste("rf = ", signif(rf,3), sep = ""), bty = "n", cex=cex.legend) + } axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) Modified: pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-27 10:30:38 UTC (rev 2901) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-27 16:05:30 UTC (rev 2902) @@ -14,19 +14,22 @@ match.col = "ES", n.portfolios = 25, xlim = NULL, ylim = NULL, cex.axis = 0.8, element.color = "darkgray", - main = "Efficient Frontier", ...) + main = "Efficient Frontier", ..., rf = 0, + cex.legend = 0.8) chart.EfficientFrontier.optimize.portfolio(object, match.col = "ES", n.portfolios = 25, xlim = NULL, ylim = NULL, cex.axis = 0.8, element.color = "darkgray", - main = "Efficient Frontier", ...) + main = "Efficient Frontier", ..., + RAR.text = "Modified Sharpe", rf = 0, cex.legend = 0.8) chart.EfficientFrontier.efficient.frontier(object, chart.assets = TRUE, match.col = "ES", n.portfolios = NULL, xlim = NULL, ylim = NULL, cex.axis = 0.8, element.color = "darkgray", - main = "Efficient Frontier", ...)
+ main = "Efficient Frontier", ..., + RAR.text = "Modified Sharpe", rf = 0, cex.legend = 0.8) } \arguments{ \item{object}{optimal portfolio created by @@ -47,13 +50,28 @@ \item{ylim}{set the y-axis limit, same as in \code{\link{plot}}} - \item{cex.axis}{} + \item{cex.axis}{A numerical value giving the amount by + which the axis should be magnified relative to the + default.} - \item{element.color}{} + \item{element.color}{provides the color for drawing + less-important chart elements, such as the box lines, + axis lines, etc.} \item{main}{a main title for the plot} \item{...}{passthrough parameters to \code{\link{plot}}} + + \item{rf}{risk free rate. If \code{rf} is not null, the + maximum Sharpe Ratio or modified Sharpe Ratio tangency + portfolio will be plotted} + + \item{cex.legend}{A numerical value giving the amount by + which the legend should be magnified relative to the + default.} + + \item{RAR.text}{Risk Adjusted Return ratio text to plot + in the legend} } \description{ This function charts the efficient frontier and @@ -86,6 +104,11 @@ GenSA does not return any useable trace information for portfolios tested at each iteration, therfore we cannot extract and chart an efficient frontier. + + By default, the tangency portfolio (maximum Sharpe Ratio + or modified Sharpe Ratio) will be plotted using a risk + free rate of 0. Set \code{rf=NULL} to omit this from the + plot. } \author{ Ross Bennett Modified: pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R 2013-08-27 10:30:38 UTC (rev 2901) +++ pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R 2013-08-27 16:05:30 UTC (rev 2902) @@ -38,6 +38,7 @@ # mean-var efficient frontier meanvar.ef <- create.EfficientFrontier(R=R, portfolio=meanvar.portf, type="mean-StdDev") chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="b") +chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l", rf=0) chart.Weights.EF(meanvar.ef, colorset=bluemono, match.col="StdDev") # run optimize.portfolio and chart the efficient frontier for that object @@ -48,7 +49,7 @@ chart.Weights.EF(opt_meanvar, match.col="StdDev") # or we can extract the efficient frontier and then plot it ef <- extractEfficientFrontier(object=opt_meanvar, match.col="StdDev", n.portfolios=15) -chart.Weights.EF(ef, match.col="var", colorset=bluemono) +chart.Weights.EF(ef, match.col="StdDev", colorset=bluemono) # mean-etl efficient frontier meanetl.ef <- create.EfficientFrontier(R=R, portfolio=meanetl.portf, type="mean-ES") @@ -57,9 +58,12 @@ # mean-etl efficient frontier using random portfolios meanetl.rp.ef <- create.EfficientFrontier(R=R, portfolio=meanetl.portf, type="random", match.col="ES") -chart.EfficientFrontier(meanetl.rp.ef, match.col="ES", main="mean-ETL RP Efficient Frontier", type="l", col="blue") +chart.EfficientFrontier(meanetl.rp.ef, match.col="ES", main="mean-ETL RP Efficient Frontier", type="l", col="blue", rf=0) chart.Weights.EF(meanetl.rp.ef, colorset=bluemono, match.col="ES") +# mean-etl efficient frontier with optimize.portfolio output +opt_meanetl <- optimize.portfolio(R=R, portfolio=meanetl.portf, optimize_method="random", search_size=2000, trace=TRUE) +chart.EfficientFrontier(meanetl.rp.ef, match.col="ES", main="mean-ETL RP Efficient Frontier", type="l", col="blue", rf=0, RAR.text="STARR") ##### overlay efficient frontiers of multiple portfolios ##### # Create a mean-var efficient frontier for multiple 
portfolios and overlay the efficient frontier lines From noreply at r-forge.r-project.org Tue Aug 27 19:53:01 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 27 Aug 2013 19:53:01 +0200 (CEST) Subject: [Returnanalytics-commits] r2903 - pkg/PortfolioAnalytics/R Message-ID: <20130827175301.1152018548E@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-27 19:53:00 +0200 (Tue, 27 Aug 2013) New Revision: 2903 Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R pkg/PortfolioAnalytics/R/generics.R Log: Adding print and summary methods for efficient.frontier objects. Returning the portfolio object for extractEfficientFrontier and create.EfficientFrontier for reproducibility and generic methods. Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-27 16:05:30 UTC (rev 2902) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-27 17:53:00 UTC (rev 2903) @@ -296,7 +296,7 @@ create.EfficientFrontier <- function(R, portfolio, type, n.portfolios=25, match.col="ES", search_size=2000, ...){ # This is just a wrapper around a few functions to easily create efficient frontiers # given a portfolio object and other parameters - + call <- match.call() if(!is.portfolio(portfolio)) stop("portfolio must be of class 'portfolio'") type <- type[1] switch(type, @@ -334,8 +334,10 @@ n.portfolios=n.portfolios) } ) - return(structure(list(frontier=frontier, - R=R), class="efficient.frontier")) + return(structure(list(call=call, + frontier=frontier, + R=R, + portfolio=portfolio), class="efficient.frontier")) } #' Extract the efficient frontier data points #' @@ -368,7 +370,7 @@ #' @export extractEfficientFrontier <- function(object, match.col="ES", n.portfolios=25){ # extract the efficient frontier from an optimize.portfolio output object - + call <- match.call() if(!inherits(object, "optimize.portfolio")) stop("object must be of class 'optimize.portfolio'") if(inherits(object, "optimize.portfolio.GenSA")){ @@ -400,6 +402,9 @@ if(inherits(object, c("optimize.portfolio.random", "optimize.portfolio.DEoptim", "optimize.portfolio.pso"))){ frontier <- extract.efficient.frontier(object=object, match.col=match.col, n.portfolios=n.portfolios) } - return(frontier) + return(structure(list(call=call, + frontier=frontier, + R=R, + portfolio=portfolio), class="efficient.frontier")) } Modified: pkg/PortfolioAnalytics/R/generics.R =================================================================== --- pkg/PortfolioAnalytics/R/generics.R 2013-08-27 16:05:30 UTC (rev 2902) +++ pkg/PortfolioAnalytics/R/generics.R 2013-08-27 17:53:00 UTC (rev 2903) @@ -642,3 +642,70 @@ print(object$elapsed_time) cat("\n") } + +#' Print an efficient frontier object +#' +#' Print method for efficient frontier objects. Display the call to create or +#' extract the efficient frontier object and the portfolio from which the +#' efficient frontier was created or extracted. +#' +#' @param x object of class \code{efficient.frontier} +#' @param ...
passthrough parameters +#' @author Ross Bennett +#' @export +print.efficient.frontier <- function(x, ...){ + if(!inherits(x, "efficient.frontier")) stop("object passed in is not of class 'efficient.frontier'") + + cat(rep("*", 50) ,"\n", sep="") + cat("PortfolioAnalytics Efficient Frontier", "\n") + cat(rep("*", 50) ,"\n", sep="") + + cat("\nCall:\n", paste(deparse(x$call), sep = "\n", collapse = "\n"), + "\n\n", sep = "") + + cat("Efficient Frontier Points:", nrow(x$frontier), "\n\n") + + print(x$portfolio) +} + +#' Summarize an efficient frontier object +#' +#' Summary method for efficient frontier objects. Display the call to create or +#' extract the efficient frontier object as well as the weights and risk and +#' return metrics along the efficient frontier. +#' +#' @param object an object of class \code{efficient.frontier} +#' @param ... passthrough parameters +#' @author Ross Bennett +#' @export +summary.efficient.frontier <- function(object, ..., digits=3){ + if(!inherits(object, "efficient.frontier")) stop("object passed in is not of class 'efficient.frontier'") + + cat(rep("*", 50) ,"\n", sep="") + cat("PortfolioAnalytics Efficient Frontier", "\n") + cat(rep("*", 50) ,"\n", sep="") + + cat("\nCall:\n", paste(deparse(object$call), sep = "\n", collapse = "\n"), + "\n\n", sep = "") + + cat("Efficient Frontier Points:", nrow(object$frontier), "\n\n") + + # Weights + cnames <- colnames(object$frontier) + wts_idx <- grep(pattern="^w\\.", cnames) + wts <- round(object$frontier[, wts_idx], digits=digits) + colnames(wts) <- gsub("w.", "", colnames(wts)) + rownames(wts) <- 1:nrow(object$frontier) + cat("Weights along the efficient frontier:\n") + print(wts) + cat("\n") + + # Risk and return + cat("Risk and return metrics along the efficient frontier:\n") + riskret <- object$frontier[, -wts_idx] + rownames(riskret) <- 1:nrow(object$frontier) + print(round(riskret, digits=digits)) + cat("\n") + invisible(list(weights=wts, metrics=riskret)) +} + From noreply at r-forge.r-project.org Tue Aug 27 20:08:49 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 27 Aug 2013 20:08:49 +0200 (CEST) Subject: [Returnanalytics-commits] r2904 - in pkg/PortfolioAnalytics: . R man Message-ID: <20130827180850.0012A183913@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-27 20:08:48 +0200 (Tue, 27 Aug 2013) New Revision: 2904 Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/generics.R pkg/PortfolioAnalytics/man/summary.optimize.portfolio.Rd Log: Cleaning up summary method for optimize.portfolio objects. Invisibly returning data that is printed in summary method.
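With r2903 in place, create.EfficientFrontier and extractEfficientFrontier return a list of class efficient.frontier carrying the matched call, the frontier points, the returns, and the portfolio specification, so the result can be handed to the new print and summary generics and the frontier reproduced later from its stored call. The following is a minimal sketch of that workflow; the portfolio construction is illustrative and assumes the edhec data set loaded with PerformanceAnalytics, mirroring the sandbox script touched in r2902:

library(PortfolioAnalytics)
data(edhec)
R <- edhec[, 1:4]

# illustrative long-only, full-investment portfolio with mean and StdDev objectives
meanvar.portf <- portfolio.spec(assets=colnames(R))
meanvar.portf <- add.constraint(portfolio=meanvar.portf, type="full_investment")
meanvar.portf <- add.constraint(portfolio=meanvar.portf, type="long_only")
meanvar.portf <- add.objective(portfolio=meanvar.portf, type="return", name="mean")
meanvar.portf <- add.objective(portfolio=meanvar.portf, type="risk", name="StdDev")

# the classed list now carries the call, frontier, returns, and portfolio
meanvar.ef <- create.EfficientFrontier(R=R, portfolio=meanvar.portf, type="mean-StdDev")
names(meanvar.ef)   # "call" "frontier" "R" "portfolio"
print(meanvar.ef)   # dispatches to print.efficient.frontier
summary(meanvar.ef) # weights and risk/return metrics at each frontier point

# the rf argument added in r2902 also draws the tangency line through the
# maximum Sharpe Ratio point, max((mean - rf) / StdDev), on the frontier
chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l", rf=0)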
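Since the list assembled by the summary method is wrapped in invisible(), the report is still written to the console by the cat() calls whenever summary() runs, but the same data can now be captured with an assignment instead of being re-parsed from printed output. A short sketch of the pattern, where opt stands in for a hypothetical object returned by optimize.portfolio:

# the report prints as a side effect of the cat() calls either way
opt.summary <- summary(opt)

# the invisibly returned list keeps the displayed data for further use,
# e.g. weights, opt_values, group_weights, factor_exposures, turnover
names(opt.summary)
opt.summary$weights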
Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-27 17:53:00 UTC (rev 2903) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-27 18:08:48 UTC (rev 2904) @@ -90,6 +90,7 @@ export(pos_limit_fail) export(position_limit_constraint) export(print.constraint) +export(print.efficient.frontier) export(print.optimize.portfolio.DEoptim) export(print.optimize.portfolio.GenSA) export(print.optimize.portfolio.pso) @@ -112,6 +113,7 @@ export(set.portfolio.moments_v1) export(set.portfolio.moments_v2) export(set.portfolio.moments) +export(summary.efficient.frontier) export(summary.optimize.portfolio.rebalancing) export(summary.optimize.portfolio) export(summary.portfolio) Modified: pkg/PortfolioAnalytics/R/generics.R =================================================================== --- pkg/PortfolioAnalytics/R/generics.R 2013-08-27 17:53:00 UTC (rev 2903) +++ pkg/PortfolioAnalytics/R/generics.R 2013-08-27 18:08:48 UTC (rev 2904) @@ -446,6 +446,7 @@ #' #' @param object an object of class "optimize.portfolio.pso" resulting from a call to optimize.portfolio #' @param ... any other passthru parameters. Currently not used. +#' @author Ross Bennett #' @export summary.optimize.portfolio <- function(object, ...){ @@ -526,8 +527,9 @@ } # group constraints - cat("Group Constraints:\n") + group_weights <- NULL if(!is.null(constraints$groups) & !is.null(constraints$cLO) & !is.null(constraints$cUP)){ + cat("Group Constraints:\n") cat("Groups:\n") groups <- constraints$groups group_labels <- constraints$group_labels @@ -599,8 +601,9 @@ cat("\n") # Factor exposure constraint - cat("Factor Exposure Constraints:\n") + tmpexp <- NULL if(!is.null(constraints$B) & !is.null(constraints$lower) & !is.null(constraints$upper)){ + cat("Factor Exposure Constraints:\n") t.B <- t(constraints$B) cat("Factor Exposure B Matrix:\n") print(constraints$B) @@ -641,6 +644,15 @@ cat("Elapsed Time:\n") print(object$elapsed_time) cat("\n") + invisible(list(weights=object$weights, + opt_values=object$objective_measures, + group_weights=group_weights, + factor_exposures=tmpexp, + diversification=diversification(object$weights), + turnover=turnover(object$weights, wts.init=object$portfolio$assets), + positions=sum(abs(object$weights) > tolerance), + long_positions=sum(object$weights > tolerance), + short_positions=sum(object$weights < -tolerance))) } #' Print an efficient frontier object Modified: pkg/PortfolioAnalytics/man/summary.optimize.portfolio.Rd =================================================================== --- pkg/PortfolioAnalytics/man/summary.optimize.portfolio.Rd 2013-08-27 17:53:00 UTC (rev 2903) +++ pkg/PortfolioAnalytics/man/summary.optimize.portfolio.Rd 2013-08-27 18:08:48 UTC (rev 2904) @@ -14,4 +14,7 @@ \description{ summary method for class "optimize.portfolio" } +\author{ + Ross Bennett +} From noreply at r-forge.r-project.org Tue Aug 27 20:20:57 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 27 Aug 2013 20:20:57 +0200 (CEST) Subject: [Returnanalytics-commits] r2905 - in pkg/PortfolioAnalytics: R man Message-ID: <20130827182058.08F8A183913@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-27 20:20:57 +0200 (Tue, 27 Aug 2013) New Revision: 2905 Added: pkg/PortfolioAnalytics/man/print.efficient.frontier.Rd pkg/PortfolioAnalytics/man/summary.efficient.frontier.Rd Modified: pkg/PortfolioAnalytics/R/generics.R Log: Adding documentation for summary and print methods for 
efficient.frontier objects. Modified: pkg/PortfolioAnalytics/R/generics.R =================================================================== --- pkg/PortfolioAnalytics/R/generics.R 2013-08-27 18:08:48 UTC (rev 2904) +++ pkg/PortfolioAnalytics/R/generics.R 2013-08-27 18:20:57 UTC (rev 2905) @@ -42,49 +42,49 @@ #' Printing Portfolio Specification Objects #' -#' Print method for class "portfolio" +#' Print method for objects of class \code{portfolio} created with \code{\link{portfolio.spec}} #' -#' @param portfolio object of class portfolio +#' @param x object of class \code{portfolio} #' @author Ross Bennett #' @export -print.portfolio <- function(portfolio){ - if(!is.portfolio(portfolio)) stop("object passed in is not of class 'portfolio'") +print.portfolio <- function(x, ...){ + if(!is.portfolio(x)) stop("object passed in is not of class 'portfolio'") cat(rep("*", 50) ,"\n", sep="") cat("PortfolioAnalytics Portfolio Specification", "\n") cat(rep("*", 50) ,"\n", sep="") - cat("\nCall:\n", paste(deparse(portfolio$call), sep = "\n", collapse = "\n"), + cat("\nCall:\n", paste(deparse(x$call), sep = "\n", collapse = "\n"), "\n", sep = "") # Assets cat("\nAssets\n") - nassets <- length(portfolio$assets) + nassets <- length(x$assets) cat("Number of assets:", nassets, "\n\n") cat("Asset Names\n") - print(head(names(portfolio$assets), 10)) + print(head(names(x$assets), 10)) if(nassets > 10){ cat("More than 10 assets, only printing the first 10\n") } # Constraints cat("\nConstraints\n") - nconstraints <- length(portfolio$constraints) + nconstraints <- length(x$constraints) if(nconstraints > 0){ # logical vector of enabled constraints - enabled.constraints <- which(sapply(portfolio$constraints, function(x) x$enabled)) + enabled.constraints <- which(sapply(x$constraints, function(x) x$enabled)) n.enabled.constraints <- ifelse(length(enabled.constraints) > 0, length(enabled.constraints), 0) } else { enabled.constraints <- NULL n.enabled.constraints <- 0 } # character vector of constraint types - names.constraints <- sapply(portfolio$constraints, function(x) x$type) + names.constraints <- sapply(x$constraints, function(x) x$type) cat("Number of constraints:", nconstraints, "\n") cat("Number of enabled constraints:", n.enabled.constraints, "\n") if(length(enabled.constraints) > 0){ cat("Enabled constraint types\n") - constraints <- portfolio$constraints + constraints <- x$constraints nconstraints <- length(constraints) for(i in 1:nconstraints){ if(constraints[[i]]$enabled){ @@ -111,7 +111,7 @@ cat("Number of disabled constraints:", nconstraints - n.enabled.constraints, "\n") if((nconstraints - n.enabled.constraints) > 0){ cat("Disabled constraint types\n") - constraints <- portfolio$constraints + constraints <- x$constraints nconstraints <- length(constraints) for(i in 1:nconstraints){ if(!constraints[[i]]$enabled){ @@ -138,17 +138,17 @@ # Objectives cat("\nObjectives\n") - nobjectives <- length(portfolio$objectives) + nobjectives <- length(x$objectives) if(nobjectives > 0){ # logical vector of enabled objectives - enabled.objectives <- which(sapply(portfolio$objectives, function(x) x$enabled)) + enabled.objectives <- which(sapply(x$objectives, function(x) x$enabled)) n.enabled.objectives <- ifelse(length(enabled.objectives) > 0, length(enabled.objectives), 0) } else { enabled.objectives <- NULL n.enabled.objectives <- 0 } # character vector of objective names - names.objectives <- sapply(portfolio$objectives, function(x) x$name) + names.objectives <- sapply(x$objectives, function(x) x$name) cat("Number 
of objectives:", nobjectives, "\n") cat("Number of enabled objectives:", n.enabled.objectives, "\n") if(n.enabled.objectives > 0){ @@ -167,36 +167,36 @@ cat("\n") } -#' Summarizing Portfolio Specification Objects +#' Summarize Portfolio Specification Objects #' -#' summary method for class "portfolio" +#' summary method for class \code{portfolio} created with \code{\link{portfolio.spec}} #' -#' @param portfolio object of class portfolio +#' @param object object of class portfolio #' @author Ross Bennett #' @export -summary.portfolio <- function(portfolio){ - if(!is.portfolio(portfolio)) stop("object passed in is not of class 'portfolio'") +summary.portfolio <- function(object, ...){ + if(!is.portfolio(object)) stop("object passed in is not of class 'portfolio'") cat(rep("*", 50) ,"\n", sep="") cat("PortfolioAnalytics Portfolio Specification Summary", "\n") cat(rep("*", 50) ,"\n", sep="") cat("Assets and Seed Weights:\n") - print(portfolio$assets) + print(object$assets) cat("\n") - if(!is.null(portfolio$category_labels)) { + if(!is.null(object$category_labels)) { cat("Category Labels:\n") - print(portfolio$category_labels) + print(object$category_labels) } - if(!is.null(portfolio$weight_seq)) { + if(!is.null(object$weight_seq)) { cat("weight_seq:\n") - print(summary(portfolio$weight_seq)) + print(summary(object$weight_seq)) } cat("Constraints:\n\n") - for(constraint in portfolio$constraints){ + for(constraint in object$constraints){ if(constraint$enabled) { cat(rep("*", 40), "\n", sep="") cat(constraint$type, "constraint\n") @@ -207,7 +207,7 @@ } cat("Objectives:\n\n") - for(objective in portfolio$objectives){ + for(objective in object$objectives){ if(objective$enabled) { cat(rep("*", 40), "\n", sep="") cat(class(objective)[1], "\n") @@ -223,33 +223,33 @@ #' @param portfolio object of class constraint #' @author Ross Bennett #' @export -print.constraint <- function(obj){ - print.default(obj) +print.constraint <- function(x, ...){ + print.default(x) } #' Printing Output of optimize.portfolio #' #' print method for optimize.portfolio.ROI #' -#' @param object an object of class "optimize.portfolio.ROI" resulting from a call to optimize.portfolio +#' @param x an object of class \code{optimize.portfolio.ROI} resulting from a call to \code{\link{optimize.portfolio}} #' @param digits the number of significant digits to use when printing. #' @param ... 
any other passthru parameters #' @export -print.optimize.portfolio.ROI <- function(object, digits = max(3, getOption("digits") - 3), ...){ +print.optimize.portfolio.ROI <- function(x, ..., digits = max(3, getOption("digits") - 3)){ cat(rep("*", 35) ,"\n", sep="") cat("PortfolioAnalytics Optimization\n") cat(rep("*", 35) ,"\n", sep="") - cat("\nCall:\n", paste(deparse(object$call), sep = "\n", collapse = "\n"), + cat("\nCall:\n", paste(deparse(x$call), sep = "\n", collapse = "\n"), "\n\n", sep = "") # get optimal weights cat("Optimal Weights:\n") - print.default(object$weights, digits=digits) + print.default(x$weights, digits=digits) cat("\n") # get objective measure - objective_measures <- object$objective_measures + objective_measures <- x$objective_measures tmp_obj <- as.numeric(unlist(objective_measures)) names(tmp_obj) <- names(objective_measures) cat("Objective Measure:\n") @@ -264,25 +264,25 @@ #' #' print method for optimize.portfolio.random #' -#' @param object an object of class "optimize.portfolio.random" resulting from a call to optimize.portfolio +#' @param x an object of class \code{optimize.portfolio.random} resulting from a call to \code{\link{optimize.portfolio}} #' @param digits the number of significant digits to use when printing. #' @param ... any other passthru parameters #' @export -print.optimize.portfolio.random <- function(object, digits=max(3, getOption("digits")-3), ...){ +print.optimize.portfolio.random <- function(x, ..., digits=max(3, getOption("digits")-3)){ cat(rep("*", 35) ,"\n", sep="") cat("PortfolioAnalytics Optimization\n") cat(rep("*", 35) ,"\n", sep="") - cat("\nCall:\n", paste(deparse(object$call), sep = "\n", collapse = "\n"), + cat("\nCall:\n", paste(deparse(x$call), sep = "\n", collapse = "\n"), "\n\n", sep = "") # get optimal weights cat("Optimal Weights:\n") - print.default(object$weights, digits=digits) + print.default(x$weights, digits=digits) cat("\n") # get objective measures - objective_measures <- object$objective_measures + objective_measures <- x$objective_measures tmp_obj <- as.numeric(unlist(objective_measures)) names(tmp_obj) <- names(objective_measures) cat("Objective Measures:\n") @@ -295,7 +295,7 @@ tmpl <- objective_measures[[i]][j] cat(names(tmpl), ":\n") tmpv <- unlist(tmpl) - names(tmpv) <- names(object$weights) + names(tmpv) <- names(x$weights) print(tmpv) cat("\n") } @@ -309,25 +309,25 @@ #' #' print method for optimize.portfolio.DEoptim #' -#' @param object an object of class "optimize.portfolio.DEoptim" resulting from a call to optimize.portfolio +#' @param x an object of class \code{optimize.portfolio.DEoptim} resulting from a call to \code{\link{optimize.portfolio}} #' @param digits the number of significant digits to use when printing. #' @param ... 
any other passthru parameters #' @export -print.optimize.portfolio.DEoptim <- function(object, digits=max(3, getOption("digits")-3), ...){ +print.optimize.portfolio.DEoptim <- function(x, ..., digits=max(3, getOption("digits")-3)){ cat(rep("*", 35) ,"\n", sep="") cat("PortfolioAnalytics Optimization\n") cat(rep("*", 35) ,"\n", sep="") - cat("\nCall:\n", paste(deparse(object$call), sep = "\n", collapse = "\n"), + cat("\nCall:\n", paste(deparse(x$call), sep = "\n", collapse = "\n"), "\n\n", sep = "") # get optimal weights cat("Optimal Weights:\n") - print.default(object$weights, digits=digits) + print.default(x$weights, digits=digits) cat("\n") # get objective measures - objective_measures <- object$objective_measures + objective_measures <- x$objective_measures tmp_obj <- as.numeric(unlist(objective_measures)) names(tmp_obj) <- names(objective_measures) cat("Objective Measures:\n") @@ -340,7 +340,7 @@ tmpl <- objective_measures[[i]][j] cat(names(tmpl), ":\n") tmpv <- unlist(tmpl) - names(tmpv) <- names(object$weights) + names(tmpv) <- names(x$weights) print(tmpv) cat("\n") } @@ -354,25 +354,25 @@ #' #' print method for optimize.portfolio.GenSA #' -#' @param object an object of class "optimize.portfolio.GenSA" resulting from a call to optimize.portfolio +#' @param x an object of class \code{optimize.portfolio.GenSA} resulting from a call to \code{\link{optimize.portfolio}} #' @param digits the number of significant digits to use when printing #' @param ... any other passthru parameters #' @export -print.optimize.portfolio.GenSA <- function(object, digits=max(3, getOption("digits")-3), ...){ +print.optimize.portfolio.GenSA <- function(x, ..., digits=max(3, getOption("digits")-3)){ cat(rep("*", 35) ,"\n", sep="") cat("PortfolioAnalytics Optimization\n") cat(rep("*", 35) ,"\n", sep="") - cat("\nCall:\n", paste(deparse(object$call), sep = "\n", collapse = "\n"), + cat("\nCall:\n", paste(deparse(x$call), sep = "\n", collapse = "\n"), "\n\n", sep = "") # get optimal weights cat("Optimal Weights:\n") - print.default(object$weights, digits=digits) + print.default(x$weights, digits=digits) cat("\n") # get objective measures - objective_measures <- object$objective_measures + objective_measures <- x$objective_measures tmp_obj <- as.numeric(unlist(objective_measures)) names(tmp_obj) <- names(objective_measures) cat("Objective Measures:\n") @@ -385,7 +385,7 @@ tmpl <- objective_measures[[i]][j] cat(names(tmpl), ":\n") tmpv <- unlist(tmpl) - names(tmpv) <- names(object$weights) + names(tmpv) <- names(x$weights) print(tmpv) cat("\n") } @@ -399,25 +399,25 @@ #' #' print method for optimize.portfolio.pso #' -#' @param object an object of class "optimize.portfolio.pso" resulting from a call to optimize.portfolio +#' @param x an object of class \code{optimize.portfolio.pso} resulting from a call to \code{\link{optimize.portfolio}} #' @param digits the number of significant digits to use when printing. #' @param ... 
any other passthru parameters #' @export -print.optimize.portfolio.pso <- function(object, digits=max(3, getOption("digits")-3), ...){ +print.optimize.portfolio.pso <- function(x, ..., digits=max(3, getOption("digits")-3)){ cat(rep("*", 35) ,"\n", sep="") cat("PortfolioAnalytics Optimization\n") cat(rep("*", 35) ,"\n", sep="") - cat("\nCall:\n", paste(deparse(object$call), sep = "\n", collapse = "\n"), + cat("\nCall:\n", paste(deparse(x$call), sep = "\n", collapse = "\n"), "\n\n", sep = "") # get optimal weights cat("Optimal Weights:\n") - print.default(object$weights, digits=digits) + print.default(x$weights, digits=digits) cat("\n") # get objective measures - objective_measures <- object$objective_measures + objective_measures <- x$objective_measures tmp_obj <- as.numeric(unlist(objective_measures)) names(tmp_obj) <- names(objective_measures) cat("Objective Measures:\n") @@ -430,7 +430,7 @@ tmpl <- objective_measures[[i]][j] cat(names(tmpl), ":\n") tmpv <- unlist(tmpl) - names(tmpv) <- names(object$weights) + names(tmpv) <- names(x$weights) print(tmpv) cat("\n") } Added: pkg/PortfolioAnalytics/man/print.efficient.frontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/print.efficient.frontier.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/print.efficient.frontier.Rd 2013-08-27 18:20:57 UTC (rev 2905) @@ -0,0 +1,21 @@ +\name{print.efficient.frontier} +\alias{print.efficient.frontier} +\title{Print an efficient frontier object} +\usage{ + print.efficient.frontier(x, ...) +} +\arguments{ + \item{x}{object of class \code{efficient.frontier}} + + \item{...}{passthrough parameters} +} +\description{ + Print method for efficient frontier objects. Displays the + call to create or extract the efficient frontier object + and the portfolio from which the efficient frontier was + created or extracted. +} +\author{ + Ross Bennett +} + Added: pkg/PortfolioAnalytics/man/summary.efficient.frontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/summary.efficient.frontier.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/summary.efficient.frontier.Rd 2013-08-27 18:20:57 UTC (rev 2905) @@ -0,0 +1,21 @@ +\name{summary.efficient.frontier} +\alias{summary.efficient.frontier} +\title{Summarize an efficient frontier object} +\usage{ + summary.efficient.frontier(object, ..., digits = 3) +} +\arguments{ + \item{object}{object of class \code{efficient.frontier}} + + \item{...}{passthrough parameters} +} +\description{ + Summary method for efficient frontier objects. Displays + the call to create or extract the efficient frontier + object as well as the weights and risk and return metrics + along the efficient frontier.
+} +\author{ + Ross Bennett +} + From noreply at r-forge.r-project.org Tue Aug 27 20:29:05 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 27 Aug 2013 20:29:05 +0200 (CEST) Subject: [Returnanalytics-commits] r2906 - pkg/PortfolioAnalytics/man Message-ID: <20130827182905.D9875183913@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-27 20:29:05 +0200 (Tue, 27 Aug 2013) New Revision: 2906 Modified: pkg/PortfolioAnalytics/man/print.constraint.Rd pkg/PortfolioAnalytics/man/print.optimize.portfolio.DEoptim.Rd pkg/PortfolioAnalytics/man/print.optimize.portfolio.GenSA.Rd pkg/PortfolioAnalytics/man/print.optimize.portfolio.ROI.Rd pkg/PortfolioAnalytics/man/print.optimize.portfolio.pso.Rd pkg/PortfolioAnalytics/man/print.optimize.portfolio.random.Rd pkg/PortfolioAnalytics/man/print.portfolio.Rd pkg/PortfolioAnalytics/man/summary.portfolio.Rd Log: Cleaning up generic methods Modified: pkg/PortfolioAnalytics/man/print.constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/print.constraint.Rd 2013-08-27 18:20:57 UTC (rev 2905) +++ pkg/PortfolioAnalytics/man/print.constraint.Rd 2013-08-27 18:29:05 UTC (rev 2906) @@ -2,7 +2,7 @@ \alias{print.constraint} \title{print method for objects of class 'constraint'} \usage{ - print.constraint(obj) + print.constraint(x, ...) } \arguments{ \item{x}{object of class constraint} Modified: pkg/PortfolioAnalytics/man/print.optimize.portfolio.DEoptim.Rd =================================================================== --- pkg/PortfolioAnalytics/man/print.optimize.portfolio.DEoptim.Rd 2013-08-27 18:20:57 UTC (rev 2905) +++ pkg/PortfolioAnalytics/man/print.optimize.portfolio.DEoptim.Rd 2013-08-27 18:29:05 UTC (rev 2906) @@ -2,13 +2,13 @@ \alias{print.optimize.portfolio.DEoptim} \title{Printing Output of optimize.portfolio} \usage{ - print.optimize.portfolio.DEoptim(object, - digits = max(3, getOption("digits") - 3), ...) + print.optimize.portfolio.DEoptim(x, ..., + digits = max(3, getOption("digits") - 3)) } \arguments{ - \item{object}{an object of class - "optimize.portfolio.DEoptim" resulting from a call to - optimize.portfolio} + \item{x}{an object of class + \code{optimize.portfolio.DEoptim} resulting from a call + to \code{\link{optimize.portfolio}}} \item{digits}{the number of significant digits to use when printing.} Modified: pkg/PortfolioAnalytics/man/print.optimize.portfolio.GenSA.Rd =================================================================== --- pkg/PortfolioAnalytics/man/print.optimize.portfolio.GenSA.Rd 2013-08-27 18:20:57 UTC (rev 2905) +++ pkg/PortfolioAnalytics/man/print.optimize.portfolio.GenSA.Rd 2013-08-27 18:29:05 UTC (rev 2906) @@ -2,13 +2,13 @@ \alias{print.optimize.portfolio.GenSA} \title{Printing Output of optimize.portfolio} \usage{ - print.optimize.portfolio.GenSA(object, - digits = max(3, getOption("digits") - 3), ...)
+ print.optimize.portfolio.GenSA(x, ..., + digits = max(3, getOption("digits") - 3)) } \arguments{ - \item{object}{an object of class - "optimize.portfolio.GenSA" resulting from a call to - optimize.portfolio} + \item{x}{an object of class + \code{optimize.portfolio.GenSA} resulting from a call to + \code{\link{optimize.portfolio}}} \item{digits}{the number of significant digits to use when printing} Modified: pkg/PortfolioAnalytics/man/print.optimize.portfolio.ROI.Rd =================================================================== --- pkg/PortfolioAnalytics/man/print.optimize.portfolio.ROI.Rd 2013-08-27 18:20:57 UTC (rev 2905) +++ pkg/PortfolioAnalytics/man/print.optimize.portfolio.ROI.Rd 2013-08-27 18:29:05 UTC (rev 2906) @@ -2,12 +2,13 @@ \alias{print.optimize.portfolio.ROI} \title{Printing Output of optimize.portfolio} \usage{ - print.optimize.portfolio.ROI(object, - digits = max(3, getOption("digits") - 3), ...) + print.optimize.portfolio.ROI(x, ..., + digits = max(3, getOption("digits") - 3)) } \arguments{ - \item{object}{an object of class "optimize.portfolio.ROI" - resulting from a call to optimize.portfolio} + \item{x}{an object of class \code{optimize.portfolio.ROI} + resulting from a call to + \code{\link{optimize.portfolio}}} \item{digits}{the number of significant digits to use when printing.} Modified: pkg/PortfolioAnalytics/man/print.optimize.portfolio.pso.Rd =================================================================== --- pkg/PortfolioAnalytics/man/print.optimize.portfolio.pso.Rd 2013-08-27 18:20:57 UTC (rev 2905) +++ pkg/PortfolioAnalytics/man/print.optimize.portfolio.pso.Rd 2013-08-27 18:29:05 UTC (rev 2906) @@ -2,12 +2,13 @@ \alias{print.optimize.portfolio.pso} \title{Printing Output of optimize.portfolio} \usage{ - print.optimize.portfolio.pso(object, - digits = max(3, getOption("digits") - 3), ...) + print.optimize.portfolio.pso(x, ..., + digits = max(3, getOption("digits") - 3)) } \arguments{ - \item{object}{an object of class "optimize.portfolio.pso" - resulting from a call to optimize.portfolio} + \item{x}{an object of class \code{optimize.portfolio.pso} + resulting from a call to + \code{\link{optimize.portfolio}}} \item{digits}{the number of significant digits to use when printing.} Modified: pkg/PortfolioAnalytics/man/print.optimize.portfolio.random.Rd =================================================================== --- pkg/PortfolioAnalytics/man/print.optimize.portfolio.random.Rd 2013-08-27 18:20:57 UTC (rev 2905) +++ pkg/PortfolioAnalytics/man/print.optimize.portfolio.random.Rd 2013-08-27 18:29:05 UTC (rev 2906) @@ -2,13 +2,13 @@ \alias{print.optimize.portfolio.random} \title{Printing Output of optimize.portfolio} \usage{ - print.optimize.portfolio.random(object, - digits = max(3, getOption("digits") - 3), ...) 
+ print.optimize.portfolio.random(x, ..., + digits = max(3, getOption("digits") - 3)) } \arguments{ - \item{object}{an object of class - "optimize.portfolio.random" resulting from a call to - optimize.portfolio} + \item{x}{an object of class + \code{optimize.portfolio.random} resulting from a call to + \code{\link{optimize.portfolio}}} \item{digits}{the number of significant digits to use when printing.} Modified: pkg/PortfolioAnalytics/man/print.portfolio.Rd =================================================================== --- pkg/PortfolioAnalytics/man/print.portfolio.Rd 2013-08-27 18:20:57 UTC (rev 2905) +++ pkg/PortfolioAnalytics/man/print.portfolio.Rd 2013-08-27 18:29:05 UTC (rev 2906) @@ -2,13 +2,14 @@ \alias{print.portfolio} \title{Printing Portfolio Specification Objects} \usage{ - print.portfolio(portfolio) + print.portfolio(x, ...) } \arguments{ - \item{portfolio}{object of class portfolio} + \item{x}{object of class \code{portfolio}} } \description{ - Print method for class "portfolio" + Print method for objects of class \code{portfolio} + created with \code{\link{portfolio.spec}} } \author{ Ross Bennett Modified: pkg/PortfolioAnalytics/man/summary.portfolio.Rd =================================================================== --- pkg/PortfolioAnalytics/man/summary.portfolio.Rd 2013-08-27 18:20:57 UTC (rev 2905) +++ pkg/PortfolioAnalytics/man/summary.portfolio.Rd 2013-08-27 18:29:05 UTC (rev 2906) @@ -1,14 +1,15 @@ \name{summary.portfolio} \alias{summary.portfolio} -\title{Summarizing Portfolio Specification Objects} +\title{Summarize Portfolio Specification Objects} \usage{ - summary.portfolio(portfolio) + summary.portfolio(object, ...) } \arguments{ - \item{portfolio}{object of class portfolio} + \item{object}{object of class portfolio} } \description{ - summary method for class "portfolio" + summary method for class \code{portfolio} created with + \code{\link{portfolio.spec}} } \author{ Ross Bennett From noreply at r-forge.r-project.org Tue Aug 27 21:56:32 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 27 Aug 2013 21:56:32 +0200 (CEST) Subject: [Returnanalytics-commits] r2907 - in pkg/PortfolioAnalytics: R man vignettes Message-ID: <20130827195632.62F54185AB7@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-27 21:56:32 +0200 (Tue, 27 Aug 2013) New Revision: 2907 Modified: pkg/PortfolioAnalytics/R/charts.DE.R pkg/PortfolioAnalytics/R/charts.GenSA.R pkg/PortfolioAnalytics/R/charts.PSO.R pkg/PortfolioAnalytics/R/charts.ROI.R pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/R/constraints.R pkg/PortfolioAnalytics/R/extractstats.R pkg/PortfolioAnalytics/R/random_portfolios.R pkg/PortfolioAnalytics/man/add.constraint.Rd pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd pkg/PortfolioAnalytics/man/extractWeights.optimize.portfolio.Rd pkg/PortfolioAnalytics/man/extractWeights.optimize.portfolio.rebalancing.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd pkg/PortfolioAnalytics/man/random_portfolios_v1.Rd pkg/PortfolioAnalytics/man/return_constraint.Rd pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw Log: Cleaning up documentation and generic methods Modified: pkg/PortfolioAnalytics/R/charts.DE.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.DE.R 
2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/R/charts.DE.R 2013-08-27 19:56:32 UTC (rev 2907) @@ -338,11 +338,14 @@ #' \code{risk.col},\code{return.col}, and weights columns all properly named. #' @param x set of portfolios created by \code{\link{optimize.portfolio}} #' @param ... any other passthru parameters +#' @param return.col string name of column to use for returns (vertical axis) #' @param risk.col string name of column to use for risk (horizontal axis) -#' @param return.col string name of column to use for returns (vertical axis) +#' @param chart.assets TRUE/FALSE to include risk-return scatter of assets #' @param neighbors set of 'neighbor portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} +#' @param xlim set the limit on coordinates for the x-axis +#' @param ylim set the limit on coordinates for the y-axis #' @export -plot.optimize.portfolio.DEoptim <- function(x, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, neighbors=NULL, xlim=NULL, ylim=NULL, main='optimized portfolio plot') { +plot.optimize.portfolio.DEoptim <- function(x, ..., return.col='mean', risk.col='ES', chart.assets=FALSE, neighbors=NULL, main='optimized portfolio plot', xlim=NULL, ylim=NULL) { charts.DE(DE=x, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, neighbors=neighbors, main=main, xlim=xlim, ylim=ylim, ...) } Modified: pkg/PortfolioAnalytics/R/charts.GenSA.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/R/charts.GenSA.R 2013-08-27 19:56:32 UTC (rev 2907) @@ -170,18 +170,21 @@ #' \code{return.col} must be the name of a function used to compute the return metric on the random portfolio weights #' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights #' -#' @param GenSA object created by \code{\link{optimize.portfolio}} +#' @param x object created by \code{\link{optimize.portfolio}} +#' @param ... any other passthru parameters #' @param rp set of weights generated by \code{\link{random_portfolio}} #' @param return.col string matching the objective of a 'return' objective, on vertical axis #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis -#' @param ... any other passthru parameters +#' @param chart.assets TRUE/FALSE to include risk-return scatter of assets #' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} #' @param element.color color for the default plot scatter points -#' @param neighbors set of 'neighbor' portfolios to overplot +#' @param neighbors set of 'neighbor' portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} +#' @param xlim set the limit on coordinates for the x-axis +#' @param ylim set the limit on coordinates for the y-axis #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.GenSA <- function(GenSA, rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", xlim=NULL, ylim=NULL, ...){ - charts.GenSA(GenSA=GenSA, rp=rp, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, xlim=xlim, ylim=ylim, ...=...) 
+plot.optimize.portfolio.GenSA <- function(x, ..., rp=FALSE, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="GenSA.Portfolios", xlim=NULL, ylim=NULL){ + charts.GenSA(GenSA=x, rp=rp, return.col=return.col, risk.col=risk.col, chart.assets=chart.assets, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, xlim=xlim, ylim=ylim, ...=...) } Modified: pkg/PortfolioAnalytics/R/charts.PSO.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/R/charts.PSO.R 2013-08-27 19:56:32 UTC (rev 2907) @@ -228,16 +228,19 @@ #' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights #' #' @param pso object created by \code{\link{optimize.portfolio}} +#' @param ... any other passthru parameters #' @param return.col string matching the objective of a 'return' objective, on vertical axis #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis -#' @param ... any other passthru parameters +#' @param chart.assets TRUE/FALSE to include risk-return scatter of assets #' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} #' @param element.color color for the default plot scatter points -#' @param neighbors set of 'neighbor' portfolios to overplot +#' @param neighbors set of 'neighbor' portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} +#' @param xlim set the limit on coordinates for the x-axis +#' @param ylim set the limit on coordinates for the y-axis #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.pso <- function(pso, return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", xlim=NULL, ylim=NULL, ...){ - charts.pso(pso=pso, return.col=return.col, risk.col=risk.col, chart.assets=FALSE, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, xlim=xlim, ylim=ylim, ...=...) +plot.optimize.portfolio.pso <- function(x, ..., return.col="mean", risk.col="ES", chart.assets=FALSE, cex.axis=0.8, element.color="darkgray", neighbors=NULL, main="PSO.Portfolios", xlim=NULL, ylim=NULL){ + charts.pso(pso=x, return.col=return.col, risk.col=risk.col, chart.assets=FALSE, cex.axis=cex.axis, element.color=element.color, neighbors=neighbors, main=main, xlim=xlim, ylim=ylim, ...=...) } Modified: pkg/PortfolioAnalytics/R/charts.ROI.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/R/charts.ROI.R 2013-08-27 19:56:32 UTC (rev 2907) @@ -177,18 +177,21 @@ #' \code{return.col} must be the name of a function used to compute the return metric on the random portfolio weights #' \code{risk.col} must be the name of a function used to compute the risk metric on the random portfolio weights #' -#' @param ROI object created by \code{\link{optimize.portfolio}} +#' @param x object created by \code{\link{optimize.portfolio}} +#' @param ... 
any other passthru parameters #' @param rp set of weights generated by \code{\link{random_portfolio}} #' @param risk.col string matching the objective of a 'risk' objective, on horizontal axis #' @param return.col string matching the objective of a 'return' objective, on vertical axis -#' @param ... any other passthru parameters +#' @param chart.assets TRUE/FALSE to include risk-return scatter of assets #' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} #' @param element.color color for the default plot scatter points -#' @param neighbors set of 'neighbor' portfolios to overplot +#' @param neighbors set of 'neighbor' portfolios to overplot #' @param main an overall title for the plot: see \code{\link{title}} +#' @param xlim set the limit on coordinates for the x-axis +#' @param ylim set the limit on coordinates for the y-axis #' @seealso \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -plot.optimize.portfolio.ROI <- function(ROI, rp=FALSE, risk.col="ES", return.col="mean", chart.assets=chart.assets, element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", xlim=NULL, ylim=NULL, ...){ - charts.ROI(ROI=ROI, rp=rp, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, main=main, xlim=xlim, ylim=ylim, ...) +plot.optimize.portfolio.ROI <- function(x, ..., rp=FALSE, risk.col="ES", return.col="mean", chart.assets=FALSE, element.color="darkgray", neighbors=NULL, main="ROI.Portfolios", xlim=NULL, ylim=NULL){ + charts.ROI(ROI=x, rp=rp, risk.col=risk.col, return.col=return.col, chart.assets=chart.assets, main=main, xlim=xlim, ylim=ylim, ...) } Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-27 19:56:32 UTC (rev 2907) @@ -47,6 +47,7 @@ #' @param rf risk free rate. If \code{rf} is not null, the maximum Sharpe Ratio or modified Sharpe Ratio tangency portfolio will be plotted #' @param cex.legend A numerical value giving the amount by which the legend should be magnified relative to the default. 
#' @param RAR.text Risk Adjusted Return ratio text to plot in the legend +#' @param chart.assets TRUE/FALSE to include risk-return scatter of assets #' @author Ross Bennett #' @export chart.EfficientFrontier <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ @@ -356,7 +357,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.efficient.frontier <- function(object, chart.assets=TRUE, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="Modified Sharpe", rf=0, cex.legend=0.8){ +chart.EfficientFrontier.efficient.frontier <- function(object, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="Modified Sharpe", rf=0, chart.assets=TRUE, cex.legend=0.8){ if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") # get the returns and efficient frontier object Modified: pkg/PortfolioAnalytics/R/constraints.R =================================================================== --- pkg/PortfolioAnalytics/R/constraints.R 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/R/constraints.R 2013-08-27 19:56:32 UTC (rev 2907) @@ -236,7 +236,7 @@ #' # Add box constraints #' pspec <- add.constraint(portfolio=pspec, type="box", min=0.05, max=0.4) #' -#' min and max can also be specified per asset +#' # min and max can also be specified per asset #' pspec <- add.constraint(portfolio=pspec, type="box", min=c(0.05, 0, 0.08, 0.1), max=c(0.4, 0.3, 0.7, 0.55)) #' # A special case of box constraints is long only where min=0 and max=1 #' # The default action is long only if min and max are not specified @@ -244,7 +244,7 @@ #' pspec <- add.constraint(portfolio=pspec, type="long_only") #' #' # Add group constraints -#' pspec <- add.constraint(portfolio=pspec, type="group", groups=c(3, 1), group_min=c(0.1, 0.15), group_max=c(0.85, 0.55), group_labels=c("GroupA", "GroupB"), group_pos=c(2, 1)) +#' pspec <- add.constraint(portfolio=pspec, type="group", groups=list(c(1, 2, 1), 4), group_min=c(0.1, 0.15), group_max=c(0.85, 0.55), group_labels=c("GroupA", "GroupB"), group_pos=c(2, 1)) #' #' # Add position limit constraint such that we have a maximum number of three assets with non-zero weights. #' pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos=3) @@ -820,7 +820,7 @@ #' #' pspec <- portfolio.spec(assets=colnames(ret)) #' -#' pspec <- add.constraint(portfolio=pspec, type="return", div_target=mean(colMeans(ret))) +#' pspec <- add.constraint(portfolio=pspec, type="return", return_target=mean(colMeans(ret))) #' @export return_constraint <- function(type="return", return_target, enabled=TRUE, message=FALSE, ...){ Constraint <- constraint_v2(type, enabled=enabled, constrclass="return_constraint", ...) Modified: pkg/PortfolioAnalytics/R/extractstats.R =================================================================== --- pkg/PortfolioAnalytics/R/extractstats.R 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/R/extractstats.R 2013-08-27 19:56:32 UTC (rev 2907) @@ -173,12 +173,13 @@ #' extract weights from output of optimize.portfolio #' -#' @param object object of type optimize.portfolio to extract weights from +#' @param object object of class \code{optimize.portfolio} to extract weights from +#' @param ... passthrough parameters. 
Not currently used #' @seealso #' \code{\link{optimize.portfolio}} #' @author Ross Bennett #' @export -extractWeights.optimize.portfolio <- function(object){ +extractWeights.optimize.portfolio <- function(object, ...){ if(!inherits(object, "optimize.portfolio")){ stop("object must be of class 'optimize.portfolio'") } @@ -192,28 +193,28 @@ #' #' The output list is indexed by the dates of the rebalancing periods, as determined by \code{endpoints} #' -#' @param RebalResults object of type optimize.portfolio.rebalancing to extract weights from +#' @param object object of class \code{optimize.portfolio.rebalancing} to extract weights from #' @param ... any other passthru parameters #' @seealso #' \code{\link{optimize.portfolio.rebalancing}} #' @export -extractWeights.optimize.portfolio.rebalancing <- function(RebalResults, ...){ +extractWeights.optimize.portfolio.rebalancing <- function(object, ...){ # @TODO: add a class check for the input object # FIXED - if(!inherits(RebalResults, "optimize.portfolio.rebalancing")){ + if(!inherits(object, "optimize.portfolio.rebalancing")){ stop("Object passed in must be of class 'optimize.portfolio.rebalancing'") } - numColumns = length(RebalResults[[1]]$weights) - numRows = length(RebalResults) + numColumns = length(object[[1]]$weights) + numRows = length(object) result <- matrix(nrow=numRows, ncol=numColumns) for(i in 1:numRows) - result[i,] = unlist(RebalResults[[i]]$weights) + result[i,] = unlist(object[[i]]$weights) - colnames(result) = names(unlist(RebalResults[[1]]$weights)) - rownames(result) = names(RebalResults) + colnames(result) = names(unlist(object[[1]]$weights)) + rownames(result) = names(object) result = as.xts(result) return(result) } Modified: pkg/PortfolioAnalytics/R/random_portfolios.R =================================================================== --- pkg/PortfolioAnalytics/R/random_portfolios.R 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/R/random_portfolios.R 2013-08-27 19:56:32 UTC (rev 2907) @@ -171,7 +171,7 @@ #' @export #' @examples #' rpconstraint<-constraint(assets=10, min_mult=-Inf, max_mult=Inf, min_sum=.99, max_sum=1.01, min=.01, max=.4, weight_seq=generatesequence()) -#' rp<- random_portfolios(rpconstraints=rpconstraint,permutations=1000) +#' rp<- random_portfolios_v1(rpconstraints=rpconstraint,permutations=1000) #' head(rp) random_portfolios_v1 <- function (rpconstraints,permutations=100,...) 
{ # Modified: pkg/PortfolioAnalytics/man/add.constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/add.constraint.Rd 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/man/add.constraint.Rd 2013-08-27 19:56:32 UTC (rev 2907) @@ -81,7 +81,7 @@ # Add box constraints pspec <- add.constraint(portfolio=pspec, type="box", min=0.05, max=0.4) -min and max can also be specified per asset +# min and max can also be specified per asset pspec <- add.constraint(portfolio=pspec, type="box", min=c(0.05, 0, 0.08, 0.1), max=c(0.4, 0.3, 0.7, 0.55)) # A special case of box constraints is long only where min=0 and max=1 # The default action is long only if min and max are not specified @@ -89,7 +89,7 @@ pspec <- add.constraint(portfolio=pspec, type="long_only") # Add group constraints -pspec <- add.constraint(portfolio=pspec, type="group", groups=c(3, 1), group_min=c(0.1, 0.15), group_max=c(0.85, 0.55), group_labels=c("GroupA", "GroupB"), group_pos=c(2, 1)) +pspec <- add.constraint(portfolio=pspec, type="group", groups=list(c(1, 2, 1), 4), group_min=c(0.1, 0.15), group_max=c(0.85, 0.55), group_labels=c("GroupA", "GroupB"), group_pos=c(2, 1)) # Add position limit constraint such that we have a maximum number of three assets with non-zero weights. pspec <- add.constraint(portfolio=pspec, type="position_limit", max_pos=3) Modified: pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-27 19:56:32 UTC (rev 2907) @@ -25,11 +25,12 @@ RAR.text = "Modified Sharpe", rf = 0, cex.legend = 0.8) chart.EfficientFrontier.efficient.frontier(object, - chart.assets = TRUE, match.col = "ES", - n.portfolios = NULL, xlim = NULL, ylim = NULL, - cex.axis = 0.8, element.color = "darkgray", + match.col = "ES", n.portfolios = NULL, xlim = NULL, + ylim = NULL, cex.axis = 0.8, + element.color = "darkgray", main = "Efficient Frontier", ..., - RAR.text = "Modified Sharpe", rf = 0, cex.legend = 0.8) + RAR.text = "Modified Sharpe", rf = 0, + chart.assets = TRUE, cex.legend = 0.8) } \arguments{ \item{object}{optimal portfolio created by @@ -72,6 +73,9 @@ \item{RAR.text}{Risk Adjusted Return ratio text to plot in the legend} + + \item{chart.assets}{TRUE/FALSE to include risk-return + scatter of assets} } \description{ This function charts the efficient frontier and Modified: pkg/PortfolioAnalytics/man/extractWeights.optimize.portfolio.Rd =================================================================== --- pkg/PortfolioAnalytics/man/extractWeights.optimize.portfolio.Rd 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/man/extractWeights.optimize.portfolio.Rd 2013-08-27 19:56:32 UTC (rev 2907) @@ -2,11 +2,13 @@ \alias{extractWeights.optimize.portfolio} \title{extract weights from output of optimize.portfolio} \usage{ - extractWeights.optimize.portfolio(object) + extractWeights.optimize.portfolio(object, ...) } \arguments{ - \item{object}{object of type optimize.portfolio to - extract weights from} + \item{object}{object of class \code{optimize.portfolio} + to extract weights from} + + \item{...}{passthrough parameters. 
Not currently used} } \description{ extract weights from output of optimize.portfolio Modified: pkg/PortfolioAnalytics/man/extractWeights.optimize.portfolio.rebalancing.Rd =================================================================== --- pkg/PortfolioAnalytics/man/extractWeights.optimize.portfolio.rebalancing.Rd 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/man/extractWeights.optimize.portfolio.rebalancing.Rd 2013-08-27 19:56:32 UTC (rev 2907) @@ -2,12 +2,13 @@ \alias{extractWeights.optimize.portfolio.rebalancing} \title{extract time series of weights from output of optimize.portfolio.rebalancing} \usage{ - extractWeights.optimize.portfolio.rebalancing(RebalResults, + extractWeights.optimize.portfolio.rebalancing(object, ...) } \arguments{ - \item{RebalResults}{object of type - optimize.portfolio.rebalancing to extract weights from} + \item{object}{object of class + \code{optimize.portfolio.rebalancing} to extract weights + from} \item{...}{any other passthru parameters} } Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.DEoptim.Rd 2013-08-27 19:56:32 UTC (rev 2907) @@ -4,8 +4,9 @@ \usage{ plot.optimize.portfolio.DEoptim(x, ..., return.col = "mean", risk.col = "ES", - chart.assets = FALSE, neighbors = NULL, xlim = NULL, - ylim = NULL, main = "optimized portfolio plot") + chart.assets = FALSE, neighbors = NULL, + main = "optimized portfolio plot", xlim = NULL, + ylim = NULL) } \arguments{ \item{x}{set of portfolios created by @@ -13,16 +14,23 @@ \item{...}{any other passthru parameters} + \item{return.col}{string name of column to use for + returns (vertical axis)} + \item{risk.col}{string name of column to use for risk (horizontal axis)} - \item{return.col}{string name of column to use for - returns (vertical axis)} + \item{chart.assets}{TRUE/FALSE to include risk-return + scatter of assets} \item{neighbors}{set of 'neighbor portfolios to overplot} \item{main}{an overall title for the plot: see \code{\link{title}}} + + \item{xlim}{set the limit on coordinates for the x-axis} + + \item{ylim}{set the limit on coordinates for the y-axis} } \description{ scatter and weights chart for DEoptim portfolio Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.GenSA.Rd 2013-08-27 19:56:32 UTC (rev 2907) @@ -2,17 +2,18 @@ \alias{plot.optimize.portfolio.GenSA} \title{scatter and weights chart for portfolios} \usage{ - plot.optimize.portfolio.GenSA(GenSA, rp = FALSE, + plot.optimize.portfolio.GenSA(x, ..., rp = FALSE, return.col = "mean", risk.col = "ES", chart.assets = FALSE, cex.axis = 0.8, element.color = "darkgray", neighbors = NULL, - main = "GenSA.Portfolios", xlim = NULL, ylim = NULL, - ...) 
+ main = "GenSA.Portfolios", xlim = NULL, ylim = NULL) } \arguments{ - \item{GenSA}{object created by + \item{x}{object created by \code{\link{optimize.portfolio}}} + \item{...}{any other passthru parameters} + \item{rp}{set of weights generated by \code{\link{random_portfolio}}} @@ -22,7 +23,8 @@ \item{risk.col}{string matching the objective of a 'risk' objective, on horizontal axis} - \item{...}{any other passthru parameters} + \item{chart.assets}{TRUE/FALSE to include risk-return + scatter of assets} \item{cex.axis}{The magnification to be used for axis annotation relative to the current setting of \code{cex}} @@ -35,6 +37,10 @@ \item{main}{an overall title for the plot: see \code{\link{title}}} + + \item{xlim}{set the limit on coordinates for the x-axis} + + \item{ylim}{set the limit on coordinates for the y-axis} } \description{ \code{return.col} must be the name of a function used to Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.ROI.Rd 2013-08-27 19:56:32 UTC (rev 2907) @@ -2,16 +2,18 @@ \alias{plot.optimize.portfolio.ROI} \title{scatter and weights chart for portfolios} \usage{ - plot.optimize.portfolio.ROI(ROI, rp = FALSE, + plot.optimize.portfolio.ROI(x, ..., rp = FALSE, risk.col = "ES", return.col = "mean", - chart.assets = chart.assets, - element.color = "darkgray", neighbors = NULL, - main = "ROI.Portfolios", xlim = NULL, ylim = NULL, ...) + chart.assets = FALSE, element.color = "darkgray", + neighbors = NULL, main = "ROI.Portfolios", xlim = NULL, + ylim = NULL) } \arguments{ - \item{ROI}{object created by + \item{x}{object created by \code{\link{optimize.portfolio}}} + \item{...}{any other passthru parameters} + \item{rp}{set of weights generated by \code{\link{random_portfolio}}} @@ -21,7 +23,8 @@ \item{return.col}{string matching the objective of a 'return' objective, on vertical axis} - \item{...}{any other passthru parameters} + \item{chart.assets}{TRUE/FALSE to include risk-return + scatter of assets} \item{cex.axis}{The magnification to be used for axis annotation relative to the current setting of \code{cex}} @@ -34,6 +37,10 @@ \item{main}{an overall title for the plot: see \code{\link{title}}} + + \item{xlim}{set the limit on coordinates for the x-axis} + + \item{ylim}{set the limit on coordinates for the y-axis} } \description{ The ROI optimizers do not store the portfolio weights Modified: pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd =================================================================== --- pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/man/plot.optimize.portfolio.pso.Rd 2013-08-27 19:56:32 UTC (rev 2907) @@ -2,22 +2,25 @@ \alias{plot.optimize.portfolio.pso} \title{scatter and weights chart for portfolios} \usage{ - plot.optimize.portfolio.pso(pso, return.col = "mean", + plot.optimize.portfolio.pso(x, ..., return.col = "mean", risk.col = "ES", chart.assets = FALSE, cex.axis = 0.8, element.color = "darkgray", neighbors = NULL, - main = "PSO.Portfolios", xlim = NULL, ylim = NULL, ...) 
+ main = "PSO.Portfolios", xlim = NULL, ylim = NULL) } \arguments{ \item{pso}{object created by \code{\link{optimize.portfolio}}} + \item{...}{any other passthru parameters} + \item{return.col}{string matching the objective of a 'return' objective, on vertical axis} \item{risk.col}{string matching the objective of a 'risk' objective, on horizontal axis} - \item{...}{any other passthru parameters} + \item{chart.assets}{TRUE/FALSE to include risk-return + scatter of assets} \item{cex.axis}{The magnification to be used for axis annotation relative to the current setting of \code{cex}} @@ -30,6 +33,10 @@ \item{main}{an overall title for the plot: see \code{\link{title}}} + + \item{xlim}{set the limit on coordinates for the x-axis} + + \item{ylim}{set the limit on coordinates for the y-axis} } \description{ \code{return.col} must be the name of a function used to Modified: pkg/PortfolioAnalytics/man/random_portfolios_v1.Rd =================================================================== --- pkg/PortfolioAnalytics/man/random_portfolios_v1.Rd 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/man/random_portfolios_v1.Rd 2013-08-27 19:56:32 UTC (rev 2907) @@ -25,7 +25,7 @@ } \examples{ rpconstraint<-constraint(assets=10, min_mult=-Inf, max_mult=Inf, min_sum=.99, max_sum=1.01, min=.01, max=.4, weight_seq=generatesequence()) -rp<- random_portfolios(rpconstraints=rpconstraint,permutations=1000) +rp<- random_portfolios_v1(rpconstraints=rpconstraint,permutations=1000) head(rp) } \author{ Modified: pkg/PortfolioAnalytics/man/return_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/return_constraint.Rd 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/man/return_constraint.Rd 2013-08-27 19:56:32 UTC (rev 2907) @@ -28,7 +28,7 @@ pspec <- portfolio.spec(assets=colnames(ret)) -pspec <- add.constraint(portfolio=pspec, type="return", div_target=mean(colMeans(ret))) +pspec <- add.constraint(portfolio=pspec, type="return", return_target=mean(colMeans(ret))) } \author{ Ross Bennett Modified: pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw =================================================================== --- pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw 2013-08-27 18:29:05 UTC (rev 2906) +++ pkg/PortfolioAnalytics/vignettes/portfolio_vignette.Rnw 2013-08-27 19:56:32 UTC (rev 2907) @@ -156,7 +156,9 @@ box_constr <- box_constraint(assets=pspec$assets, min=0, max=1) # group constraint -group_constr <- group_constraint(assets=pspec$assets, groups=c(3, 1), +group_constr <- group_constraint(assets=pspec$assets, + groups=list(c(1, 2, 3), + 4), group_min=c(0.1, 0.15), group_max=c(0.85, 0.55), group_labels=c("GroupA", "GroupB")) From noreply at r-forge.r-project.org Tue Aug 27 22:33:59 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 27 Aug 2013 22:33:59 +0200 (CEST) Subject: [Returnanalytics-commits] r2908 - in pkg/FactorAnalytics: R man Message-ID: <20130827203359.EBAA0183D86@r-forge.r-project.org> Author: chenyian Date: 2013-08-27 22:33:59 +0200 (Tue, 27 Aug 2013) New Revision: 2908 Added: pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r pkg/FactorAnalytics/man/factorModelPerformanceAttribution.Rd Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R Log: add function factorModelPerformanceAttribution.r and its .Rd file. 
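For orientation, a minimal usage sketch of the new function (it mirrors the roxygen example in the diff below; managers.df is the example data set shipped with FactorAnalytics):

library(FactorAnalytics)
data(managers.df)
# fit a time series factor model, then attribute each asset's returns
# to the factors and a specific (residual) component
fit.ts <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]),
                                   factors.names=c("EDHEC.LS.EQ","SP500.TR"),
                                   data=managers.df, fit.method="OLS")
fm.attr <- factorModelPerformanceAttribution(fit.ts)
fm.attr$cum.ret.attr.f  # cumulative returns attributed to each factor
fm.attr$cum.spec.ret    # cumulative specific return for each asset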
Added: pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r =================================================================== --- pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r (rev 0) +++ pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r 2013-08-27 20:33:59 UTC (rev 2908) @@ -0,0 +1,253 @@ +# performance attribution +# Yi-An Chen +# July 30, 2012 + + + +#' Compute BARRA-type performance attribution +#' +#' Decompose total returns or active returns into returns attributed to factors +#' and specific returns. An object of class FM.attribution is generated, and the generic +#' functions \code{plot()}, \code{summary()} and \code{print()} can be used. +#' +#' Total returns can be decomposed into returns attributed to factors and +#' specific returns. \eqn{R_t = \sum_j b_{jt} * f_{jt} + +#' u_t}, t=1..T, where \eqn{b_{jt}} is the exposure to factor j and \eqn{f_{jt}} is factor +#' j. The return attributed to factor j is \eqn{b_{jt} * f_{jt}} and the portfolio +#' specific return is \eqn{u_t}. +#' +#' @param fit Object of class "TimeSeriesFactorModel", "FundamentalFactorModel" or +#' "StatFactorModel". +#' @param benchmark an xts, vector or data.frame providing benchmark time series +#' returns. +#' @param ... Other controlled variables for fit methods. +#' @return an object of class \code{FM.attribution} containing +#' \itemize{ +#' \item{cum.ret.attr.f} N x J matrix of cumulative returns attributed to +#' factors. +#' \item{cum.spec.ret} 1 x N vector of cumulative specific returns. +#' \item{attr.list} list of time series of attributed returns for every +#' portfolio. +#' } +#' @author Yi-An Chen. +#' @references Grinold, R. and Kahn, R., \emph{Active Portfolio Management}, +#' McGraw-Hill. +#' @examples +#' +#' +#' data(managers.df) +#' fit.ts <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), +#' factors.names=c("EDHEC.LS.EQ","SP500.TR"), +#' data=managers.df,fit.method="OLS") +#' # without benchmark +#' fm.attr <- factorModelPerformanceAttribution(fit.ts) +#' +#' +#' +factorModelPerformanceAttribution <- + function(fit,benchmark=NULL,...) { + + # input + # fit : object of class TimeSeriesFactorModel, FundamentalFactorModel or StatFactorModel + # benchmark: benchmark returns, default is NULL. If benchmark is provided, active returns + # are used. + # ... : controlled variables for fitMacroeconomicsFactorModel and fitStatisticalFactorModel + # output + # class of "FMattribution" + # + # plot.FMattribution + # summary.FMattribution + # print.FMattribution + require(xts) + + if (class(fit) !="TimeSeriesFactorModel" & class(fit) !="FundamentalFactorModel" + & class(fit) != "StatFactorModel") + { + stop("Class has to be either 'TimeSeriesFactorModel', 'FundamentalFactorModel' or + 'StatFactorModel'.") + } + + # TimeSeriesFactorModel chunk + + if (class(fit) == "TimeSeriesFactorModel") { + + # if benchmark is provided + +# if (!is.null(benchmark)) { +# ret.assets = fit$ret.assets - benchmark +# fit = fitTimeSeriesFactorModel(ret.assets=ret.assets,...)
+# } +# return attributed to factors + cum.attr.ret <- fit$beta + cum.spec.ret <- fit$alpha + factorName = colnames(fit$beta) + fundName = rownames(fit$beta) + + attr.list <- list() + + for (k in fundName) { + fit.lm = fit$asset.fit[[k]] + + ## extract information from lm object + date <- names(fitted(fit.lm)) + + actual.xts = xts(fit.lm$model[1], as.Date(date)) + + +# attributed returns +# active portfolio management p.512 17A.9 + + cum.ret <- Return.cumulative(actual.xts) + # setup initial value + attr.ret.xts.all <- xts(, as.Date(date)) + for ( i in factorName ) { + + if (fit$beta[k,i]==0) { + cum.attr.ret[k,i] <- 0 + attr.ret.xts.all <- merge(attr.ret.xts.all,xts(rep(0,length(date)),as.Date(date))) + } else { + attr.ret.xts <- actual.xts - xts(as.matrix(fit.lm$model[i])%*%as.matrix(fit.lm$coef[i]), + as.Date(date)) + cum.attr.ret[k,i] <- cum.ret - Return.cumulative(actual.xts-attr.ret.xts) + attr.ret.xts.all <- merge(attr.ret.xts.all,attr.ret.xts) + } + + } + + # specific returns + spec.ret.xts <- actual.xts - xts(as.matrix(fit.lm$model[,-1])%*%as.matrix(fit.lm$coef[-1]), + as.Date(date)) + cum.spec.ret[k] <- cum.ret - Return.cumulative(actual.xts-spec.ret.xts) + attr.list[[k]] <- merge(attr.ret.xts.all,spec.ret.xts) + colnames(attr.list[[k]]) <- c(factorName,"specific.returns") + } + + + } + +if (class(fit) =="FundamentalFactorModel" ) { + # if benchmark is provided + + if (!is.null(benchmark)) { + stop("use fitFundamentalFactorModel instead") + } + # return attributed to factors + factor.returns <- fit$factor.returns[,-1] + factor.names <- fit$exposure.names + dates <- index(factor.returns) + ticker <- fit$asset.names + + + + #cumulative return attributed to factors + cum.attr.ret <- matrix(,nrow=length(ticker),ncol=length(factor.names), + dimnames=list(ticker,factor.names)) + cum.spec.ret <- rep(0,length(ticker)) + names(cum.spec.ret) <- ticker + + # make list of every asstes and every list contains return attributed to factors + # and specific returns + + attr.list <- list() + for (k in ticker) { + idx <- which(fit$data[,assetvar]== k) + returns <- fit$data[idx,returnsvar] + attr.factor <- fit$data[idx,factor.names] * coredata(factor.returns) + specific.returns <- returns - apply(attr.factor,1,sum) + attr <- cbind(returns,attr.factor,specific.returns) + attr.list[[k]] <- xts(attr,as.Date(dates)) + cum.attr.ret[k,] <- apply(attr.factor,2,Return.cumulative) + cum.spec.ret[k] <- Return.cumulative(specific.returns) + } + + + +} + + if (class(fit) == "StatFactorModel") { + + # if benchmark is provided + + if (!is.null(benchmark)) { + x = fit$asset.ret - benchmark + fit = fitStatisticalFactorModel(data=x,...) 
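+      # note: when a benchmark is supplied for a StatFactorModel, active
+      # returns (asset returns minus the benchmark) are computed and the
+      # statistical factor model is re-fit before attribution is calculated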
+ } + # return attributed to factors + cum.attr.ret <- t(fit$loadings) + cum.spec.ret <- fit$r2 + factorName = rownames(fit$loadings) + fundName = colnames(fit$loadings) + + # create list for attribution + attr.list <- list() + # pca method + + if ( dim(fit$asset.ret)[1] > dim(fit$asset.ret)[2] ) { + + + for (k in fundName) { + fit.lm = fit$asset.fit[[k]] + + ## extract information from lm object + date <- names(fitted(fit.lm)) + # probably needs more general Date setting + actual.xts = xts(fit.lm$model[1], as.Date(date)) + + + # attributed returns + # active portfolio management p.512 17A.9 + + cum.ret <- Return.cumulative(actual.xts) + # setup initial value + attr.ret.xts.all <- xts(, as.Date(date)) + for ( i in factorName ) { + + attr.ret.xts <- actual.xts - xts(as.matrix(fit.lm$model[i])%*%as.matrix(fit.lm$coef[i]), + as.Date(date)) + cum.attr.ret[k,i] <- cum.ret - Return.cumulative(actual.xts-attr.ret.xts) + attr.ret.xts.all <- merge(attr.ret.xts.all,attr.ret.xts) + + + } + + # specific returns + spec.ret.xts <- actual.xts - xts(as.matrix(fit.lm$model[,-1])%*%as.matrix(fit.lm$coef[-1]), + as.Date(date)) + cum.spec.ret[k] <- cum.ret - Return.cumulative(actual.xts-spec.ret.xts) + attr.list[[k]] <- merge(attr.ret.xts.all,spec.ret.xts) + colnames(attr.list[[k]]) <- c(factorName,"specific.returns") + } + } else { + # apca method +# fit$loadings # f X K +# fit$factors # T X f + + dates <- index(fit$factors) + for ( k in fundName) { + attr.ret.xts.all <- xts(, as.Date(dates)) + actual.xts <- xts(fit$asset.ret[,k],as.Date(dates)) + cum.ret <- Return.cumulative(actual.xts) + for (i in factorName) { + attr.ret.xts <- xts(fit$factors[,i] * fit$loadings[i,k], as.Date(dates) ) + attr.ret.xts.all <- merge(attr.ret.xts.all,attr.ret.xts) + cum.attr.ret[k,i] <- cum.ret - Return.cumulative(actual.xts-attr.ret.xts) + } + spec.ret.xts <- actual.xts - xts(fit$factors%*%fit$loadings[,k],as.Date(dates)) + cum.spec.ret[k] <- cum.ret - Return.cumulative(actual.xts-spec.ret.xts) + attr.list[[k]] <- merge(attr.ret.xts.all,spec.ret.xts) + colnames(attr.list[[k]]) <- c(factorName,"specific.returns") + } + + + } + + } + + + + ans = list(cum.ret.attr.f=cum.attr.ret, + cum.spec.ret=cum.spec.ret, + attr.list=attr.list) +class(ans) = "FM.attribution" +return(ans) + } Modified: pkg/FactorAnalytics/R/fitFundamentalFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-27 19:56:32 UTC (rev 2907) +++ pkg/FactorAnalytics/R/fitFundamentalFactorModel.R 2013-08-27 20:33:59 UTC (rev 2908) @@ -443,6 +443,7 @@ # change names for intercept colnames(f.hat)[1] <- "Intercept" + output <- list(returns.cov = Cov.returns, factor.cov = Cov.factors, resids.cov = Cov.resids, Added: pkg/FactorAnalytics/man/factorModelPerformanceAttribution.Rd =================================================================== --- pkg/FactorAnalytics/man/factorModelPerformanceAttribution.Rd (rev 0) +++ pkg/FactorAnalytics/man/factorModelPerformanceAttribution.Rd 2013-08-27 20:33:59 UTC (rev 2908) @@ -0,0 +1,55 @@ +\name{factorModelPerformanceAttribution} +\alias{factorModelPerformanceAttribution} +\title{Compute BARRA-type performance attribution} +\usage{ + factorModelPerformanceAttribution(fit, benchmark = NULL, + ...) 
+} +\arguments{ + \item{fit}{Object of class "TimeSeriesFactorModel", + "FundamentalFactorModel" or "StatFactorModel".} + + \item{benchmark}{an xts, vector or data.frame providing + benchmark time series returns.} + + \item{...}{Other controlled variables for fit methods.} +} +\value{ + an object of class \code{FM.attribution} containing + \itemize{ \item{cum.ret.attr.f} N x J matrix of + cumulative returns attributed to factors. + \item{cum.spec.ret} 1 x N vector of cumulative specific + returns. \item{attr.list} list of time series of + attributed returns for every portfolio. } +} +\description{ + Decompose total returns or active returns into returns + attributed to factors and specific returns. An object of class + FM.attribution is generated, and the generic functions + \code{plot()}, \code{summary()} and \code{print()} can be + used. +} +\details{ + Total returns can be decomposed into returns attributed + to factors and specific returns. \eqn{R_t = \sum_j b_{jt} + * f_{jt} + u_t}, t=1..T, where \eqn{b_{jt}} is the exposure to factor + j and \eqn{f_{jt}} is factor j. The return attributed to + factor j is \eqn{b_{jt} * f_{jt}} and the portfolio specific + return is \eqn{u_t}. +} +\examples{ +data(managers.df) +fit.ts <- fitTimeSeriesFactorModel(assets.names=colnames(managers.df[,(1:6)]), + factors.names=c("EDHEC.LS.EQ","SP500.TR"), + data=managers.df,fit.method="OLS") +# without benchmark +fm.attr <- factorModelPerformanceAttribution(fit.ts) +} +\author{ + Yi-An Chen. +} +\references{ + Grinold, R. and Kahn, R., \emph{Active Portfolio Management}, + McGraw-Hill. +} + From noreply at r-forge.r-project.org Tue Aug 27 22:36:51 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Tue, 27 Aug 2013 22:36:51 +0200 (CEST) Subject: [Returnanalytics-commits] r2909 - in pkg/FactorAnalytics: . R Message-ID: <20130827203652.0A6B0183D86@r-forge.r-project.org> Author: chenyian Date: 2013-08-27 22:36:51 +0200 (Tue, 27 Aug 2013) New Revision: 2909 Modified: pkg/FactorAnalytics/NAMESPACE pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r Log: export the function Modified: pkg/FactorAnalytics/NAMESPACE =================================================================== --- pkg/FactorAnalytics/NAMESPACE 2013-08-27 20:33:59 UTC (rev 2908) +++ pkg/FactorAnalytics/NAMESPACE 2013-08-27 20:36:51 UTC (rev 2909) @@ -1,3 +1,4 @@ +export(factorModelPerformanceAttribution) export(dCornishFisher) export(factorModelCovariance) export(factorModelEsDecomposition) Modified: pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r =================================================================== --- pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r 2013-08-27 20:33:59 UTC (rev 2908) +++ pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r 2013-08-27 20:36:51 UTC (rev 2909) @@ -32,6 +32,7 @@ #' @author Yi-An Chen. #' @references Grinold, R. and Kahn, R., \emph{Active Portfolio Management}, #' McGraw-Hill.
+#' @export #' @examples #' #' From noreply at r-forge.r-project.org Wed Aug 28 00:22:41 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 28 Aug 2013 00:22:41 +0200 (CEST) Subject: [Returnanalytics-commits] r2910 - in pkg/FactorAnalytics: R man Message-ID: <20130827222241.16B4B185D1F@r-forge.r-project.org> Author: chenyian Date: 2013-08-28 00:22:40 +0200 (Wed, 28 Aug 2013) New Revision: 2910 Modified: pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R pkg/FactorAnalytics/man/fitTimeseriesFactorModel.Rd Log: debug Modified: pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r =================================================================== --- pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r 2013-08-27 20:36:51 UTC (rev 2909) +++ pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r 2013-08-27 22:22:40 UTC (rev 2910) @@ -75,10 +75,11 @@ # if benchmark is provided # if (!is.null(benchmark)) { -# ret.assets = fit$ret.assets - benchmark +# ret.assets = fit$data[] - benchmark # fit = fitTimeSeriesFactorModel(ret.assets=ret.assets,...) # } -# return attributed to factors + + # return attributed to factors cum.attr.ret <- fit$beta cum.spec.ret <- fit$alpha factorName = colnames(fit$beta) Modified: pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R =================================================================== --- pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R 2013-08-27 20:36:51 UTC (rev 2909) +++ pkg/FactorAnalytics/R/fitTimeSeriesFactorModel.R 2013-08-27 22:22:40 UTC (rev 2910) @@ -53,10 +53,15 @@ #' \item{r2} {N x 1 Vector of R-square values.} #' \item{resid.variance} {N x 1 Vector of residual variances.} #' \item{call} {function call.} +#' \item{data} original data as input +#' \item{factors.names} factors.names as input +#' \item{variable.selection} variable.selection as input +#' \item{assets.names} asset.names as input #' } #' #' -#' interpreted as number +#' +#' #' @author Eric Zivot and Yi-An Chen. 
#' @references #' \enumerate{ Modified: pkg/FactorAnalytics/man/fitTimeseriesFactorModel.Rd =================================================================== --- pkg/FactorAnalytics/man/fitTimeseriesFactorModel.Rd 2013-08-27 20:36:51 UTC (rev 2909) +++ pkg/FactorAnalytics/man/fitTimeseriesFactorModel.Rd 2013-08-27 22:22:40 UTC (rev 2910) @@ -81,9 +81,10 @@ \item{beta} {N x K Matrix of estimated betas.} \item{r2} {N x 1 Vector of R-square values.} \item{resid.variance} {N x 1 Vector of residual variances.} \item{call} - {function call.} } - - interpreted as number + {function call.} \item{data} original data as input + \item{factors.names} factors.names as input + \item{variable.selection} variable.selection as input + \item{assets.names} asset.names as input } } \description{ Fit time series factor model by time series regression From noreply at r-forge.r-project.org Wed Aug 28 01:19:50 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 28 Aug 2013 01:19:50 +0200 (CEST) Subject: [Returnanalytics-commits] r2911 - in pkg/FactorAnalytics: R vignettes Message-ID: <20130827231950.D32CA183D86@r-forge.r-project.org> Author: chenyian Date: 2013-08-28 01:19:50 +0200 (Wed, 28 Aug 2013) New Revision: 2911 Modified: pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw Log: add section performance attribution in vignettes Modified: pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r =================================================================== --- pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r 2013-08-27 22:22:40 UTC (rev 2910) +++ pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r 2013-08-27 23:19:50 UTC (rev 2911) @@ -91,7 +91,7 @@ fit.lm = fit$asset.fit[[k]] ## extract information from lm object - date <- names(fitted(fit.lm)) + date <- index(fit$data[,k]) actual.xts = xts(fit.lm$model[1], as.Date(date)) @@ -191,7 +191,7 @@ fit.lm = fit$asset.fit[[k]] ## extract information from lm object - date <- names(fitted(fit.lm)) + date <- index(fit$data[,k]) # probably needs more general Date setting actual.xts = xts(fit.lm$model[1], as.Date(date)) Modified: pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw =================================================================== --- pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-27 22:22:40 UTC (rev 2910) +++ pkg/FactorAnalytics/vignettes/fundamentalFM.Rnw 2013-08-27 23:19:50 UTC (rev 2911) @@ -376,4 +376,20 @@ @ +\section{Performance Attribution} + +Users can do factor-based performance attribution with the \verb at factorAnalytics@ package. The factor model +\begin{equation} +r_t = \alpha + Bf_t + e_t,\;t=1 \cdots T +\end{equation} +breaks asset returns down into two pieces. The first term is the \emph{returns attributed to factors}, $Bf_t$, and the second term is the \emph{specific returns}, which is simply $\alpha + e_t$. + +For example, let's break down the time series factor model we used previously. The function \verb at factorModelPerformanceAttribution()@ calculates the performance attribution. <<>>= ts.attr <- factorModelPerformanceAttribution(fit.time) names(ts.attr) @ The function generates three items. \verb at cum.ret.attr.f@ is an N x K matrix of cumulative returns attributed to factors. \verb at cum.spec.ret@ is an N x 1 matrix of cumulative specific returns. \verb at attr.list@ is a list containing the return attribution to each factor and the specific returns, asset by asset.
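A short sketch of one way to inspect the result (each element of attr.list is an xts object; ts.attr as created in the chunk above):

<<>>=
# names of the assets for which attribution was computed
names(ts.attr$attr.list)
# factor-attributed and specific return series for the first asset
head(ts.attr$attr.list[[1]])
@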
+ + \end{document} \ No newline at end of file From noreply at r-forge.r-project.org Wed Aug 28 02:16:36 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 28 Aug 2013 02:16:36 +0200 (CEST) Subject: [Returnanalytics-commits] r2912 - pkg/FactorAnalytics/R Message-ID: <20130828001637.001BE185ADE@r-forge.r-project.org> Author: chenyian Date: 2013-08-28 02:16:36 +0200 (Wed, 28 Aug 2013) New Revision: 2912 Modified: pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r Log: modify industry component model compatibility with factorModelPerformanceAttribution.r Modified: pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r =================================================================== --- pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r 2013-08-27 23:19:50 UTC (rev 2911) +++ pkg/FactorAnalytics/R/factorModelPerformanceAttribution.r 2013-08-28 00:16:36 UTC (rev 2912) @@ -135,7 +135,7 @@ } # return attributed to factors factor.returns <- fit$factor.returns[,-1] - factor.names <- fit$exposure.names + factor.names <- colnames(fit$beta) dates <- index(factor.returns) ticker <- fit$asset.names @@ -152,9 +152,17 @@ attr.list <- list() for (k in ticker) { - idx <- which(fit$data[,assetvar]== k) - returns <- fit$data[idx,returnsvar] - attr.factor <- fit$data[idx,factor.names] * coredata(factor.returns) + idx <- which(fit$data[,fit$assetvar]== k) + returns <- fit$data[idx,fit$returnsvar] + num.f.names <- intersect(fit$exposure.names,factor.names) + # check if there is industry factors + if (length(setdiff(fit$exposure.names,factor.names))>0 ){ + ind.f <- matrix(rep(fit$beta[k,][-(1:length(num.f.names))],length(idx)),nrow=length(idx),byrow=TRUE) + colnames(ind.f) <- colnames(fit$beta)[-(1:length(num.f.names))] + exposure <- cbind(fit$data[idx,num.f.names],ind.f) + } else {exposure <- fit$data[idx,num.f.names] } + + attr.factor <- exposure * coredata(factor.returns) specific.returns <- returns - apply(attr.factor,1,sum) attr <- cbind(returns,attr.factor,specific.returns) attr.list[[k]] <- xts(attr,as.Date(dates)) From noreply at r-forge.r-project.org Wed Aug 28 03:01:39 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 28 Aug 2013 03:01:39 +0200 (CEST) Subject: [Returnanalytics-commits] r2913 - pkg/PortfolioAnalytics/sandbox Message-ID: <20130828010139.8901F1850D1@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-28 03:01:39 +0200 (Wed, 28 Aug 2013) New Revision: 2913 Modified: pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R Log: Modifying testing_efficient_frontier to include print and summary methods. 
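[Editor's note: for readers unfamiliar with how such methods are wired up, the following is a minimal sketch of S3 print and summary methods for a class like "efficient.frontier". The field name `frontier` is an assumption made for illustration; it is not necessarily the structure PortfolioAnalytics uses.]

# Hypothetical S3 methods; assumes the object stores a numeric matrix
# of frontier portfolios in a field called `frontier`.
print.efficient.frontier <- function(x, ...) {
  cat("Efficient frontier with", nrow(x$frontier), "portfolios\n")
  invisible(x)
}
summary.efficient.frontier <- function(object, ...) {
  # range of each column (risk, return, weights) across the frontier
  apply(object$frontier, 2, range)
}
# Dispatch happens automatically once the object carries the class:
# ef <- structure(list(frontier = cbind(mean = 1:3/100, StdDev = 4:6/100)),
#                 class = "efficient.frontier")
# print(ef); summary(ef)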
Modified: pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R 2013-08-28 00:16:36 UTC (rev 2912) +++ pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R 2013-08-28 01:01:39 UTC (rev 2913) @@ -37,6 +37,8 @@ # mean-var efficient frontier meanvar.ef <- create.EfficientFrontier(R=R, portfolio=meanvar.portf, type="mean-StdDev") +print(meanvar.ef) +summary(meanvar.ef) chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="b") chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l", rf=0) chart.Weights.EF(meanvar.ef, colorset=bluemono, match.col="StdDev") @@ -49,10 +51,13 @@ chart.Weights.EF(opt_meanvar, match.col="StdDev") # or we can extract the efficient frontier and then plot it ef <- extractEfficientFrontier(object=opt_meanvar, match.col="StdDev", n.portfolios=15) +print(ef) chart.Weights.EF(ef, match.col="StdDev", colorset=bluemono) # mean-etl efficient frontier meanetl.ef <- create.EfficientFrontier(R=R, portfolio=meanetl.portf, type="mean-ES") +print(meanetl.ef) +summary(meanetl.ef) chart.EfficientFrontier(meanetl.ef, match.col="ES", main="mean-ETL Efficient Frontier", type="l", col="blue") chart.Weights.EF(meanetl.ef, colorset=bluemono, match.col="ES") @@ -92,6 +97,3 @@ chart.EfficientFrontierOverlay(R=R, portfolio_list=portf.list, type="mean-StdDev", match.col="StdDev", legend.loc="right", legend.labels=legend.labels) -# TODO add the following methods for objects of class efficient.frontier -# - print -# - summary From noreply at r-forge.r-project.org Wed Aug 28 06:32:45 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 28 Aug 2013 06:32:45 +0200 (CEST) Subject: [Returnanalytics-commits] r2914 - pkg/PortfolioAnalytics/vignettes Message-ID: <20130828043245.3D5C71852C9@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-28 06:32:44 +0200 (Wed, 28 Aug 2013) New Revision: 2914 Added: pkg/PortfolioAnalytics/vignettes/optimization-overview.Snw pkg/PortfolioAnalytics/vignettes/optimization-overview.pdf Log: Modifying optimization-overview vignette to use v2 specification. Added: pkg/PortfolioAnalytics/vignettes/optimization-overview.Snw =================================================================== --- pkg/PortfolioAnalytics/vignettes/optimization-overview.Snw (rev 0) +++ pkg/PortfolioAnalytics/vignettes/optimization-overview.Snw 2013-08-28 04:32:44 UTC (rev 2914) @@ -0,0 +1,390 @@ +\documentclass[a4paper]{article} +\usepackage[round]{natbib} +\usepackage{bm} +\usepackage{verbatim} +\usepackage[latin1]{inputenc} +% \VignetteIndexEntry{Portfolio Optimization with CVaR budgets in PortfolioAnalytics} +\bibliographystyle{abbrvnat} + +\usepackage{url} + +\let\proglang=\textsf +\newcommand{\pkg}[1]{{\fontseries{b}\selectfont #1}} +\newcommand{\R}[1]{{\fontseries{b}\selectfont #1}} +\newcommand{\email}[1]{\href{mailto:#1}{\normalfont\texttt{#1}}} +\newcommand{\E}{\mathsf{E}} +\newcommand{\VAR}{\mathsf{VAR}} +\newcommand{\COV}{\mathsf{COV}} +\newcommand{\Prob}{\mathsf{P}} + +\renewcommand{\topfraction}{0.85} +\renewcommand{\textfraction}{0.1} +\renewcommand{\baselinestretch}{1.5} +\setlength{\textwidth}{15cm} \setlength{\textheight}{22cm} \topmargin-1cm \evensidemargin0.5cm \oddsidemargin0.5cm + +\usepackage[latin1]{inputenc} +% or whatever + +\usepackage{lmodern} +\usepackage[T1]{fontenc} +% Or whatever. Note that the encoding and the font should match. 
If T1 +% does not look nice, try deleting the line with the fontenc. + +\begin{document} + +\title{Vignette: Portfolio Optimization with CVaR budgets\\ +in PortfolioAnalytics} +\author{Kris Boudt, Peter Carl and Brian Peterson } +\date{June 1, 2010} + +\maketitle +\tableofcontents + + +\bigskip + +\section{General information} + +Risk budgets are a central tool to estimate and manage the portfolio risk allocation. They decompose total portfolio risk into the risk contribution of each position. \citet{BoudtCarlPeterson2010} propose several portfolio allocation strategies that use an appropriate transformation of the portfolio Conditional Value at Risk (CVaR) budget as an objective or constraint in the portfolio optimization problem. This document explains how risk allocation optimized portfolios can be obtained under general constraints in the \verb"PortfolioAnalytics" package of \citet{PortAnalytics}. + +\verb"PortfolioAnalytics" is designed to provide numerical solutions for portfolio problems with complex constraints and objective sets comprised of any R function. It can e.g.~construct portfolios that minimize a risk objective with (possibly non-linear) per-asset constraints on returns and drawdowns \citep{CarlPetersonBoudt2010}. The generality of possible constraints and objectives is a distinctive characteristic of the package with respect to RMetrics \verb"fPortfolio" of \citet{fPortfolioBook}. For standard Markowitz optimization problems, use of \verb"fPortfolio" rather than \verb"PortfolioAnalytics" is recommended. + +\verb"PortfolioAnalytics" solves the following type of problem +\begin{equation} \min_w g(w) \ \ s.t. \ \ +\left\{ \begin{array}{l} h_1(w)\leq 0 \\ \vdots \\ h_q(w)\leq 0. \end{array} \right. \label{optimproblem}\end{equation} \verb"PortfolioAnalytics" first merges the objective function and constraints into a penalty augmented objective function +\begin{equation} L(w) = g(w) + \mbox{penalty}\sum_{i=1}^q \lambda_i \max(h_i(w),0), \label{eq:constrainedobj} \end{equation} +where $\lambda_i$ is a multiplier to tune the relative importance of the constraints. The default values of penalty and $\lambda_i$ (called \verb"multiplier" in \verb"PortfolioAnalytics") are 10000 and 1, respectively. + +The minimum of this function is found through the \emph{Differential Evolution} (DE) algorithm of \citet{StornPrice1997}, ported to R by \citet{MullenArdiaGilWindoverCline2009}. DE is known for remarkable performance on continuous numerical problems \citep{PriceStornLampinen2006}. It has recently been advocated for optimizing portfolios under non-convex settings by \citet{Ardia2010} and \citet{Yollin2009}, among others. We use the R implementation of DE in the \verb"DEoptim" package of \citet{DEoptim}. + +The latest version of the \verb"PortfolioAnalytics" package can be downloaded from R-forge through the following command: +\begin{verbatim} +install.packages("PortfolioAnalytics", repos="http://R-Forge.R-project.org") +\end{verbatim} + +Its principal functions are: +\begin{itemize} +\item \verb"constraint(assets,min,max,min_sum,max_sum)": the portfolio optimization specification starts with specifying the shape of the weight vector through the function \verb"constraint". The weights have to be between \verb"min" and \verb"max" and their sum between \verb"min_sum" and \verb"max_sum". The first argument \verb"assets" is either a number indicating the number of portfolio assets or a vector holding the names of the assets.
+ +\item \verb"add.objective(constraints, type, name)": \verb"constraints" is a list holding the objective to be minimized and the constraints. New elements to this list are added by the function \verb"add.objective". Many common risk budget objectives and constraints are prespecified and can be identified by specifying the \verb"type" and \verb"name". + + +\item \verb"constrained_objective(w, R, constraints)": given the portfolio weight and return data, it evaluates the penalty augmented objective function in (\ref{eq:constrainedobj}). + +\item \verb"optimize.portfolio(R,constraints)": this function returns the portfolio weight that solves the problem in (\ref{optimproblem}). {\it R} is the multivariate return series of the portfolio components. + +\item \verb"optimize.portfolio.rebalancing(R,constraints,rebalance_on,trailing_periods": this function solves the multiperiod optimization problem. It returns for each rebalancing period the optimal weights and allows the estimation sample to be either from inception or a moving window. + +\end{itemize} + +Next we illustrate these functions on monthly return data for bond, US equity, international equity and commodity indices, which are the first 4 series +in the dataset \verb"indexes". The first step is to load the package \verb"PortfolioAnalytics" and the dataset. An important first note is that some of the functions (especially \verb" optimize.portfolio.rebalancing") requires the dataset to be a \verb"xts" object \citep{xts}. + + +<>= +options(width=80) +@ + +<>=| +library(PortfolioAnalytics) +#source("constrained_objective.R") +data(indexes) +class(indexes) +indexes <- indexes[,1:4] +head(indexes,2) +tail(indexes,2) +@ + +In what follows, we first illustrate the construction of the penalty augmented objective function. Then we present the code for solving the optimization problem. + +\section{Setting of the objective function} + +\subsection{Weight constraints} + +<>=| +# Wcons <- constraint( assets = colnames(indexes[,1:4]) ,min = rep(0,4), +# max=rep(1,4), min_sum=1,max_sum=1 ) +pspec <- portfolio.spec(assets=colnames(indexes[,1:4])) +pspec <- add.constraint(portfolio=pspec, type="leverage", min_sum=1, max_sum=1) +pspec <- add.constraint(portfolio=pspec, type="box", min=0, max=1) +@ + +Given the weight constraints, we can call the value of the function to be minimized. We consider the case of no violation and a case of violation. By default, \verb"normalize=TRUE" which means that if the sum of weights exceeds \verb"max_sum", the weight vector is normalized by multiplying it with \verb"sum(weights)/max_sum" such that the weights evaluated in the objective function satisfy the \verb"max_sum" constraint. +<>=| +# constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = Wcons) +# constrained_objective_v1( w = rep(1/3,4) , R = indexes[,1:4] , constraints = Wcons) +# constrained_objective_v1( w = rep(1/3,4) , R = indexes[,1:4] , constraints = Wcons, normalize=FALSE) +constrained_objective(w = rep(1/4, 4), R = indexes[, 1:4], portfolio = pspec) +constrained_objective(w = rep(1/3, 4), R = indexes[, 1:4], portfolio = pspec) +constrained_objective(w = rep(1/3, 4), R = indexes[, 1:4], portfolio = pspec, + normalize=FALSE) +@ + +The latter value can be recalculated as penalty times the weight violation, that is: $10000 \times 1/3.$ + +\subsection{Minimum CVaR objective function} + +Suppose now we want to find the portfolio that minimizes the 95\% portfolio CVaR subject to the weight constraints listed above. 
+ +<>=| +# ObjSpec = add.objective_v1( constraints = Wcons , type="risk",name="CVaR", +# arguments=list(p=0.95), enabled=TRUE) +pspec <- add.objective(portfolio = pspec, type = "risk", name = "CVaR", + arguments = list(p=0.95)) +@ + +The value of the objective function is: +<>=| +# constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , constraints = ObjSpec) +constrained_objective( w = rep(1/4,4) , R = indexes[,1:4], portfolio = pspec) +@ +This is the CVaR of the equal-weight portfolio as computed by the function \verb"ES" in the \verb"PerformanceAnalytics" package of \citet{ Carl2007} +<>=| +library(PerformanceAnalytics) +out<-ES(indexes[,1:4],weights = rep(1/4,4),p=0.95, portfolio_method="component") +out$MES +@ +All arguments in the function \verb"ES" can be passed on through \verb"arguments". E.g. to reduce the impact of extremes on the portfolio results, it is recommended to winsorize the data using the option clean="boudt". + +<>=| +out<-ES(indexes[,1:4],weights = rep(1/4,4),p=0.95,clean="boudt", + portfolio_method="component") +out$MES +@ + + +For the formulation of the objective function, this implies setting: +<>=| +# ObjSpec = add.objective_v1( constraints = Wcons , type="risk",name="CVaR", +# arguments=list(p=0.95,clean="boudt"), enabled=TRUE) +pspec <- add.objective(portfolio = pspec, type = "risk", name = "CVaR", + arguments = list(p=0.95, clean="boudt"), indexnum=1) +constrained_objective( w = rep(1/4,4) , R = indexes[,1:4], portfolio=pspec) +@ + +An additional argument that is not available for the moment in \verb"ES" is to estimate the conditional covariance matrix through +the constant conditional correlation model of \citet{Bollerslev90}. + +For the formulation of the objective function, this implies setting: +<>=| +# ObjSpec = add.objective_v1( constraints = Wcons , type="risk",name="CVaR", +# arguments=list(p=0.95,clean="boudt"), +# enabled=TRUE, garch=TRUE) +pspec <- add.objective(portfolio = pspec, type = "risk", name = "CVaR", + arguments = list(p=0.95, clean="boudt"), + indexnum=1, garch=TRUE) +constrained_objective( w = rep(1/4,4) , R = indexes[,1:4], portfolio = pspec) +@ + +\subsection{Minimum CVaR concentration objective function} + +Add the minimum 95\% CVaR concentration objective to the objective function: +<>=| +# ObjSpec = add.objective_v1( constraints = Wcons , type="risk_budget_objective", +# name="CVaR", arguments=list(p=0.95,clean="boudt"), +# min_concentration=TRUE, enabled=TRUE) +pspec <- add.objective(portfolio=pspec, type="risk_budget_objective", + name="CVaR", arguments=list(p=0.95,clean="boudt"), + min_concentration=TRUE) +@ +The value of the objective function is: +<>=| +# constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , +# constraints = ObjSpec) +constrained_objective( w = rep(1/4,4) , R = indexes[,1:4] , portfolio = pspec) +@ +We can verify that this is effectively the largest CVaR contribution of that portfolio as follows: +<>=| +ES(indexes[,1:4],weights = rep(1/4,4),p=0.95,clean="boudt", + portfolio_method="component") +@ + +\subsection{Risk allocation constraints} + +We see that in the equal-weight portfolio, the international equities and commodities investment +cause more than 30\% of total risk. We could specify as a constraint that no asset can contribute +more than 30\% to total portfolio risk. 
This involves the construction of the following objective function: + +<>=| +# ObjSpec = add.objective_v1( constraints = Wcons , type="risk_budget_objective", +# name="CVaR", max_prisk = 0.3, +# arguments=list(p=0.95,clean="boudt"), enabled=TRUE) +# constrained_objective_v1( w = rep(1/4,4) , R = indexes[,1:4] , +# constraints = ObjSpec) +pspec = add.objective( portfolio = pspec , type="risk_budget_objective",name="CVaR", + max_prisk = 0.3, arguments=list(p=0.95,clean="boudt")) +constrained_objective( w = rep(1/4,4), R = indexes[,1:4], portfolio = pspec) +@ + +This value corresponds to the penalty parameter which has by default the value of 10000 times the exceedances: $ 10000*(0.045775103+0.054685023)\approx 1004.601.$ + +\section{Optimization} + +The penalty augmented objective function is minimized through Differential Evolution. Two parameters are crucial in tuning the optimization: \verb"search_size" and \verb"itermax". The optimization routine +\begin{enumerate} +\item First creates the initial generation of \verb"NP= search_size/itermax" guesses for the optimal value of the parameter vector, using the \verb"random_portfolios" function generating random weights satisfying the weight constraints. +\item Then DE evolves over this population of candidate solutions using alteration and selection operators in order to minimize the objective function. It restarts \verb"itermax" times. +\end{enumerate} It is important that \verb"search_size/itermax" is high enough. It is generally recommended that this ratio is at least ten times the length of the weight vector. For more details on the use of DE strategy in portfolio allocation, we refer the +reader to \citet{Ardia2010}. + +\subsection{Minimum CVaR portfolio under an upper 40\% CVaR allocation constraint} + +The functions needed to obtain the minimum CVaR portfolio under an upper 40\% CVaR allocation constraint are the following: +\begin{verbatim} +> ObjSpec <- constraint(assets = colnames(indexes[,1:4]),min = rep(0,4), ++ max=rep(1,4), min_sum=1,max_sum=1 ) +> ObjSpec <- add.objective_v1( constraints = ObjSpec, type="risk", ++ name="CVaR", arguments=list(p=0.95,clean="boudt"),enabled=TRUE) +> ObjSpec <- add.objective_v1( constraints = ObjSpec, ++ type="risk_budget_objective", name="CVaR", max_prisk = 0.4, ++ arguments=list(p=0.95,clean="boudt"), enabled=TRUE) +> set.seed(1234) +> out = optimize.portfolio_v1(R= indexes[,1:4],constraints=ObjSpec, ++ optimize_method="DEoptim",itermax=10, search_size=2000) +\end{verbatim} +After the call to these functions it starts to explore the feasible space iteratively: +\begin{verbatim} +Iteration: 1 bestvalit: 0.029506 bestmemit: 0.810000 0.126000 0.010000 0.140000 +Iteration: 2 bestvalit: 0.029506 bestmemit: 0.810000 0.126000 0.010000 0.140000 +Iteration: 3 bestvalit: 0.029272 bestmemit: 0.758560 0.079560 0.052800 0.112240 +Iteration: 4 bestvalit: 0.029272 bestmemit: 0.758560 0.079560 0.052800 0.112240 +Iteration: 5 bestvalit: 0.029019 bestmemit: 0.810000 0.108170 0.010000 0.140000 +Iteration: 6 bestvalit: 0.029019 bestmemit: 0.810000 0.108170 0.010000 0.140000 +Iteration: 7 bestvalit: 0.029019 bestmemit: 0.810000 0.108170 0.010000 0.140000 +Iteration: 8 bestvalit: 0.028874 bestmemit: 0.692069 0.028575 0.100400 0.071600 +Iteration: 9 bestvalit: 0.028874 bestmemit: 0.692069 0.028575 0.100400 0.071600 +Iteration: 10 bestvalit: 0.028874 bestmemit: 0.692069 0.028575 0.100400 0.071600 +elapsed time:1.85782111114926 +\end{verbatim} + +If \verb"TRACE=FALSE" the only output in \verb"out" is the weight vector 
that optimizes the objective function. + +\begin{verbatim} +> out[[1]] + US Bonds US Equities Int'l Equities Commodities + 0.77530240 0.03201150 0.11247491 0.08021119 \end{verbatim} + +If \verb"TRACE=TRUE" additional information is given, such as the value of the objective function and the different constraints. + +\subsection{Minimum CVaR concentration portfolio} + +The functions needed to obtain the minimum CVaR concentration portfolio are the following: + +\begin{verbatim} +> ObjSpec <- constraint(assets = colnames(indexes[,1:4]) ,min = rep(0,4), ++ max=rep(1,4), min_sum=1,max_sum=1 ) +> ObjSpec <- add.objective_v1( constraints = ObjSpec, ++ type="risk_budget_objective", name="CVaR", ++ arguments=list(p=0.95,clean="boudt"), ++ min_concentration=TRUE,enabled=TRUE) +> set.seed(1234) +> out = optimize.portfolio_v1(R= indexes[,1:4],constraints=ObjSpec, ++ optimize_method="DEoptim",itermax=50, search_size=5000) +\end{verbatim} +The iterations are as follows: +\begin{verbatim} +Iteration: 1 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000 +Iteration: 2 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000 +Iteration: 3 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000 +Iteration: 4 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000 +Iteration: 5 bestvalit: 0.010598 bestmemit: 0.800000 0.100000 0.118000 0.030000 +Iteration: 45 bestvalit: 0.008209 bestmemit: 0.976061 0.151151 0.120500 0.133916 +Iteration: 46 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004 +Iteration: 47 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004 +Iteration: 48 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004 +Iteration: 49 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004 +Iteration: 50 bestvalit: 0.008170 bestmemit: 0.897703 0.141514 0.109601 0.124004 +elapsed time:4.1324522222413 +\end{verbatim} +This portfolio has the equal risk contribution characteristic: +\begin{verbatim} +> out[[1]] + US Bonds US Equities Int'l Equities Commodities + 0.70528537 0.11118139 0.08610905 0.09742419 +> ES(indexes[,1:4],weights = out[[1]],p=0.95,clean="boudt", ++ portfolio_method="component") +$MES + [,1] +[1,] 0.03246264 + +$contribution + US Bonds US Equities Int'l Equities Commodities + 0.008169565 0.008121930 0.008003228 0.008167917 + +$pct_contrib_MES + US Bonds US Equities Int'l Equities Commodities + 0.2516605 0.2501931 0.2465366 0.2516098 \end{verbatim} + + + + +\subsection{Dynamic optimization} + +Dynamic rebalancing of the risk budget optimized portfolio is possible through the function \verb"optimize.portfolio.rebalancing". Additional arguments are \verb"rebalance_on", which indicates the rebalancing frequency (years, quarters, months). The estimation is either done from inception (\verb"trailing_periods=0") or through moving window estimation, where each window has \verb"trailing_periods" observations. The minimum number of observations in the estimation sample is specified by \verb"training_period". Its default value is 36, which corresponds to three years for monthly data. + +As an example, consider the minimum CVaR concentration portfolio, with estimation from inception and monthly rebalancing. Since we require a minimum estimation length of the total number of observations minus one, we can optimize the portfolio only for the last two months.
+ +\begin{verbatim} +> set.seed(1234) +> out = optimize.portfolio.rebalancing_v1(R= indexes,constraints=ObjSpec, rebalance_on ="months", ++ optimize_method="DEoptim",itermax=50, search_size=5000, training_period = nrow(indexes)-1 ) +\end{verbatim} + +For each of the optimizations, the iterations are given as intermediate output: +\begin{verbatim} +Iteration: 1 bestvalit: 0.010655 bestmemit: 0.800000 0.100000 0.118000 0.030000 +Iteration: 2 bestvalit: 0.010655 bestmemit: 0.800000 0.100000 0.118000 0.030000 +Iteration: 49 bestvalit: 0.008207 bestmemit: 0.787525 0.124897 0.098001 0.108258 +Iteration: 50 bestvalit: 0.008195 bestmemit: 0.774088 0.122219 0.095973 0.104338 +elapsed time:4.20546416666773 +Iteration: 1 bestvalit: 0.011006 bestmemit: 0.770000 0.050000 0.090000 0.090000 +Iteration: 2 bestvalit: 0.010559 bestmemit: 0.498333 0.010000 0.070000 0.080000 +Iteration: 49 bestvalit: 0.008267 bestmemit: 0.828663 0.126173 0.100836 0.114794 +Iteration: 50 bestvalit: 0.008267 bestmemit: 0.828663 0.126173 0.100836 0.114794 +elapsed time:4.1060591666566 +overall elapsed time:8.31152777777778 +\end{verbatim} +The output is a list holding, for each rebalancing period, the output of the optimization, such as the portfolio weights. +\begin{verbatim} +> out[[1]]$weights + US Bonds US Equities Int'l Equities Commodities + 0.70588695 0.11145087 0.08751686 0.09514531 +> out[[2]]$weights + US Bonds US Equities Int'l Equities Commodities + 0.70797640 0.10779728 0.08615059 0.09807574 +\end{verbatim} +But also the value of the objective function: +\begin{verbatim} +> out[[1]]$out +[1] 0.008195072 +> out[[2]]$out +[1] 0.008266844 +\end{verbatim} +The first and last observation from the estimation sample: +\begin{verbatim} +> out[[1]]$data_summary +$first + US Bonds US Equities Int'l Equities Commodities +1980-01-31 -0.0272 0.061 0.0462 0.0568 + +$last + US Bonds US Equities Int'l Equities Commodities +2009-11-30 0.0134 0.0566 0.0199 0.015 + +> out[[2]]$data_summary +$first + US Bonds US Equities Int'l Equities Commodities +1980-01-31 -0.0272 0.061 0.0462 0.0568 + +$last + US Bonds US Equities Int'l Equities Commodities +2009-12-31 -0.0175 0.0189 0.0143 0.0086 +\end{verbatim} + +Of course, DE is a stochastic optimizer and typically will only find a near-optimal solution that depends on the seed. The function \verb"optimize.portfolio.parallel" in \verb"PortfolioAnalytics" allows one to run an arbitrary number of portfolio sets in parallel in order to develop "confidence bands" around your solution. It is based on Revolution's \verb"foreach" package \citep{foreach}. + +\bibliography{PA} + + +\end{document} + Added: pkg/PortfolioAnalytics/vignettes/optimization-overview.pdf =================================================================== (Binary files differ) Property changes on: pkg/PortfolioAnalytics/vignettes/optimization-overview.pdf ___________________________________________________________________ Added: svn:mime-type + application/octet-stream From noreply at r-forge.r-project.org Wed Aug 28 07:08:12 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 28 Aug 2013 07:08:12 +0200 (CEST) Subject: [Returnanalytics-commits] r2915 - pkg/PortfolioAnalytics/R Message-ID: <20130828050812.2AE0518599B@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-28 07:08:11 +0200 (Wed, 28 Aug 2013) New Revision: 2915 Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R Log: Using foreach and dopar to create mean-var and mean-etl efficient frontiers.
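[Editor's note: the pattern this log refers to is sketched below in generic form, under the assumption of a registered doParallel backend. Each target return is optimized independently, so the frontier rows can be computed in parallel and row-bound; solve_one() is a placeholder for the per-target optimization, not a PortfolioAnalytics function.]

# Generic foreach/%dopar% sketch of a parallel frontier loop;
# solve_one() stands in for the per-target optimization call.
library(foreach)
library(doParallel)
registerDoParallel(cores = 2)
targets <- seq(0.005, 0.009, length.out = 5)   # candidate target returns
solve_one <- function(mu) c(target = mu, risk = sqrt(mu))  # placeholder
frontier <- foreach(i = seq_along(targets), .inorder = TRUE,
                    .combine = rbind, .errorhandling = "remove") %dopar% {
  solve_one(targets[i])
}
frontier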
Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-28 04:32:44 UTC (rev 2914) +++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R 2013-08-28 05:08:11 UTC (rev 2915) @@ -164,9 +164,14 @@ out <- matrix(0, nrow=length(ret_seq), ncol=length(extractStats(tmp))) - for(i in 1:length(ret_seq)){ +# for(i in 1:length(ret_seq)){ +# portfolio$objectives[[mean_idx]]$target <- ret_seq[i] +# out[i, ] <- extractStats(optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI")) +# } + stopifnot("package:foreach" %in% search() || require("foreach",quietly = TRUE)) + out <- foreach(i=1:length(ret_seq), .inorder=TRUE, .combine=rbind, .errorhandling='remove') %dopar% { portfolio$objectives[[mean_idx]]$target <- ret_seq[i] - out[i, ] <- extractStats(optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI")) + extractStats(optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI")) } colnames(out) <- names(stats) return(structure(out, class="efficient.frontier")) @@ -243,9 +248,14 @@ out <- matrix(0, nrow=length(ret_seq), ncol=length(extractStats(tmp))) - for(i in 1:length(ret_seq)){ +# for(i in 1:length(ret_seq)){ +# portfolio$objectives[[mean_idx]]$target <- ret_seq[i] +# out[i, ] <- extractStats(optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI")) +# } + stopifnot("package:foreach" %in% search() || require("foreach",quietly = TRUE)) + out <- foreach(i=1:length(ret_seq), .inorder=TRUE, .combine=rbind, .errorhandling='remove') %dopar% { portfolio$objectives[[mean_idx]]$target <- ret_seq[i] - out[i, ] <- extractStats(optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI")) + extractStats(optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI")) } colnames(out) <- names(stats) return(structure(out, class="efficient.frontier")) From noreply at r-forge.r-project.org Wed Aug 28 10:55:04 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 28 Aug 2013 10:55:04 +0200 (CEST) Subject: [Returnanalytics-commits] r2916 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: . R man Message-ID: <20130828085504.B2085185C16@r-forge.r-project.org> Author: shubhanm Date: 2013-08-28 10:55:04 +0200 (Wed, 28 Aug 2013) New Revision: 2916 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/se.LoSharpe.R pkg/PerformanceAnalytics/sandbox/Shubhankit/man/se.LoSharpe.Rd Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE pkg/PerformanceAnalytics/sandbox/Shubhankit/man/LoSharpe.Rd Log: / standard error LoSharpe Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION 2013-08-28 05:08:11 UTC (rev 2915) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION 2013-08-28 08:55:04 UTC (rev 2916) @@ -1,38 +1,39 @@ -Package: noniid.sm -Type: Package -Title: Non-i.i.d. GSoC 2013 Shubhankit -Version: 0.1 -Date: $Date: 2013-05-13 14:30:22 -0500 (Mon, 13 May 2013) $ -Author: Shubhankit Mohan -Contributors: Peter Carl, Brian G. Peterson -Depends: - xts, - PerformanceAnalytics -Suggests: - PortfolioAnalytics -Maintainer: Brian G. Peterson -Description: GSoC 2013 project to replicate literature on drawdowns and - non-i.i.d assumptions in finance. 
-License: GPL-3 -ByteCompile: TRUE -Collate: - 'ACStdDev.annualized.R' - 'CalmarRatio.Normalized.R' - 'CDDopt.R' - 'CDrawdown.R' - 'chart.Autocorrelation.R' - 'EmaxDDGBM.R' - 'GLMSmoothIndex.R' - 'maxDDGBM.R' - 'na.skip.R' - 'Return.GLM.R' - 'table.ComparitiveReturn.GLM.R' - 'table.normDD.R' - 'table.UnsmoothReturn.R' - 'UnsmoothReturn.R' - 'AcarSim.R' - 'CDD.Opt.R' - 'CalmarRatio.Norm.R' - 'SterlingRatio.Norm.R' - 'LoSharpe.R' - 'Return.Okunev.R' +Package: noniid.sm +Type: Package +Title: Non-i.i.d. GSoC 2013 Shubhankit +Version: 0.1 +Date: $Date: 2013-05-13 14:30:22 -0500 (Mon, 13 May 2013) $ +Author: Shubhankit Mohan +Contributors: Peter Carl, Brian G. Peterson +Depends: + xts, + PerformanceAnalytics +Suggests: + PortfolioAnalytics +Maintainer: Brian G. Peterson +Description: GSoC 2013 project to replicate literature on drawdowns and + non-i.i.d assumptions in finance. +License: GPL-3 +ByteCompile: TRUE +Collate: + 'ACStdDev.annualized.R' + 'CalmarRatio.Normalized.R' + 'CDDopt.R' + 'CDrawdown.R' + 'chart.Autocorrelation.R' + 'EmaxDDGBM.R' + 'GLMSmoothIndex.R' + 'maxDDGBM.R' + 'na.skip.R' + 'Return.GLM.R' + 'table.ComparitiveReturn.GLM.R' + 'table.normDD.R' + 'table.UnsmoothReturn.R' + 'UnsmoothReturn.R' + 'AcarSim.R' + 'CDD.Opt.R' + 'CalmarRatio.Norm.R' + 'SterlingRatio.Norm.R' + 'LoSharpe.R' + 'Return.Okunev.R' + 'se.LoSharpe.R' Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE 2013-08-28 05:08:11 UTC (rev 2915) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE 2013-08-28 08:55:04 UTC (rev 2916) @@ -12,6 +12,7 @@ export(QP.Norm) export(Return.GLM) export(Return.Okunev) +export(se.LoSharpe) export(SterlingRatio.Norm) export(SterlingRatio.Normalized) export(table.ComparitiveReturn.GLM) Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/se.LoSharpe.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/se.LoSharpe.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/se.LoSharpe.R 2013-08-28 08:55:04 UTC (rev 2916) @@ -0,0 +1,92 @@ +#'@title Andrew Lo Sharpe Ratio Statistics +#'@description +#' Although the Sharpe ratio has become part of the canon of modern financial +#' analysis, its applications typically do not account for the fact that it is an +#' estimated quantity, subject to estimation errors which can be substantial in +#' some cases. +#' +#' Many studies have documented various violations of the assumption of +#' IID returns for financial securities. +#' +#' Under the assumption of stationarity,a version of the Central Limit Theorem can +#' still be applied to the estimator . +#' @details +#' The relationship between SR and SR(q) is somewhat more involved for non- +#'IID returns because the variance of Rt(q) is not just the sum of the variances of component returns but also includes all the covariances. 
Specifically, under +#' the assumption that returns \eqn{R_t} are stationary, +#' \deqn{ Var[(R_t)] = \sum \sum Cov(R(t-i),R(t-j)) = q{\sigma^2} + 2{\sigma^2} \sum (q-k)\rho(k) } +#' Where \eqn{ \rho(k) = Cov(R(t),R(t-k))/Var[(R_t)]} is the \eqn{k^{th}} order autocorrelation coefficient of the series of returns.This yields the following relationship between SR and SR(q): +#' and i,j belongs to 0 to q-1 +#'\deqn{SR(q) = \eta(q) } +#'Where : +#' \deqn{ }{\eta(q) = [q]/[\sqrt(q\sigma^2) + 2\sigma^2 \sum(q-k)\rho(k)] } +#' Where k belongs to 0 to q-1 +#' @param Ra an xts, vector, matrix, data frame, timeSeries or zoo object of +#' daily asset returns +#' @param Rf an xts, vector, matrix, data frame, timeSeries or zoo object of +#' annualized Risk Free Rate +#' @param q Number of autocorrelated lag periods. Taken as 3 (Default) +#' @param \dots any other pass thru parameters +#' @author Brian G. Peterson, Peter Carl, Shubhankit Mohan +#' @references Getmansky, Mila, Lo, Andrew W. and Makarov, Igor,\emph{ An Econometric Model of Serial Correlation and Illiquidity in Hedge Fund Returns} (March 1, 2003). MIT Sloan Working Paper No. 4288-03; MIT Laboratory for Financial Engineering Working Paper No. LFE-1041A-03; EFMA 2003 Helsinki Meetings. +#'\code{\link[stats]{}} \cr +#' \url{http://ssrn.com/abstract=384700} +#' @keywords ts multivariate distribution models non-iid +#' @examples +#' +#' data(edhec) +#' head(se.LoSharpe(edhec,0,3) +#' @rdname se.LoSharpe +#' @export +se.LoSharpe <- + function (Ra,Rf = 0,q = 3, ...) + { # @author Brian G. Peterson, Peter Carl + + + # Function: + R = checkData(Ra, method="xts") + # Get dimensions and labels + columns.a = ncol(R) + columnnames.a = colnames(R) + # Time used for daily Return manipulations + Time= 252*nyears(edhec) + clean.lo <- function(column.R,q) { + # compute the lagged return series + gamma.k =matrix(0,q) + mu = sum(column.R)/(Time) + Rf= Rf/(Time) + for(i in 1:q){ + lagR = lag(column.R, k=i) + # compute the Momentum Lagged Values + gamma.k[i]= (sum(((column.R-mu)*(lagR-mu)),na.rm=TRUE)) + } + return(gamma.k) + } + neta.lo <- function(pho.k,q) { + # compute the lagged return series + sumq = 0 + for(j in 1:q){ + sumq = sumq+ (q-j)*pho.k[j] + } + return(q/(sqrt(q+2*sumq))) + } + for(column.a in 1:columns.a) { # for each asset passed in as R + # clean the data and get rid of NAs + mu = sum(R[,column.a])/(Time) + sig=sqrt(((R[,column.a]-mu)^2/(Time))) + pho.k = clean.lo(R[,column.a],q)/(as.numeric(sig[1])) + netaq=neta.lo(pho.k,q) + column.lo = (netaq*((mu-Rf)/as.numeric(sig[1]))) + column.lo= 1.96*sqrt((1+(column.lo*column.lo/2))/(Time)) + if(column.a == 1) { lo = column.lo } + else { lo = cbind (lo, column.lo) } + + } + colnames(lo) = columnnames.a + rownames(lo)= paste("Standard Error of Sharpe Ratio Estimates(95% Confidence)") + return(lo) + + + # RESULTS: + + } Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/LoSharpe.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/LoSharpe.Rd 2013-08-28 05:08:11 UTC (rev 2915) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/LoSharpe.Rd 2013-08-28 08:55:04 UTC (rev 2916) @@ -1,70 +1,70 @@ -\name{LoSharpe} -\alias{LoSharpe} -\title{Andrew Lo Sharpe Ratio} -\usage{ - LoSharpe(Ra, Rf = 0, q = 3, ...) 
-} -\arguments{ - \item{Ra}{an xts, vector, matrix, data frame, timeSeries - or zoo object of daily asset returns} - - \item{Rf}{an xts, vector, matrix, data frame, timeSeries - or zoo object of annualized Risk Free Rate} - - \item{q}{Number of autocorrelated lag periods. Taken as 3 - (Default)} - - \item{\dots}{any other pass thru parameters} -} -\description{ - Although the Sharpe ratio has become part of the canon of - modern financial analysis, its applications typically do - not account for the fact that it is an estimated - quantity, subject to estimation errors that can be - substantial in some cases. - - Many studies have documented various violations of the - assumption of IID returns for financial securities. - - Under the assumption of stationarity,a version of the - Central Limit Theorem can still be applied to the - estimator . -} -\details{ - The relationship between SR and SR(q) is somewhat more - involved for non- IID returns because the variance of - Rt(q) is not just the sum of the variances of component - returns but also includes all the covariances. - Specifically, under the assumption that returns \eqn{R_t} - are stationary, \deqn{ Var[(R_t)] = \sum \sum - Cov(R(t-i),R(t-j)) = q{\sigma^2} + 2{\sigma^2} \sum - (q-k)\rho(k) } Where \eqn{ \rho(k) = - Cov(R(t),R(t-k))/Var[(R_t)]} is the \eqn{k^{th}} order - autocorrelation coefficient of the series of returns.This - yields the following relationship between SR and SR(q): - and i,j belongs to 0 to q-1 \deqn{SR(q) = \eta(q) } Where - : \deqn{ }{\eta(q) = [q]/[\sqrt(q\sigma^2) + 2\sigma^2 - \sum(q-k)\rho(k)] } Where k belongs to 0 to q-1 -} -\examples{ -data(edhec) -head(LoSharpe(edhec,0,3) -} -\author{ - Brian G. Peterson, Peter Carl, Shubhankit Mohan -} -\references{ - Getmansky, Mila, Lo, Andrew W. and Makarov, Igor,\emph{ - An Econometric Model of Serial Correlation and - Illiquidity in Hedge Fund Returns} (March 1, 2003). MIT - Sloan Working Paper No. 4288-03; MIT Laboratory for - Financial Engineering Working Paper No. LFE-1041A-03; - EFMA 2003 Helsinki Meetings. \code{\link[stats]{}} \cr - \url{http://ssrn.com/abstract=384700} -} -\keyword{distribution} -\keyword{models} -\keyword{multivariate} -\keyword{non-iid} -\keyword{ts} - +\name{LoSharpe} +\alias{LoSharpe} +\title{Andrew Lo Sharpe Ratio} +\usage{ + LoSharpe(Ra, Rf = 0, q = 3, ...) +} +\arguments{ + \item{Ra}{an xts, vector, matrix, data frame, timeSeries + or zoo object of daily asset returns} + + \item{Rf}{an xts, vector, matrix, data frame, timeSeries + or zoo object of annualized Risk Free Rate} + + \item{q}{Number of autocorrelated lag periods. Taken as 3 + (Default)} + + \item{\dots}{any other pass thru parameters} +} +\description{ + Although the Sharpe ratio has become part of the canon of + modern financial analysis, its applications typically do + not account for the fact that it is an estimated + quantity, subject to estimation errors that can be + substantial in some cases. + + Many studies have documented various violations of the + assumption of IID returns for financial securities. + + Under the assumption of stationarity,a version of the + Central Limit Theorem can still be applied to the + estimator . +} +\details{ + The relationship between SR and SR(q) is somewhat more + involved for non- IID returns because the variance of + Rt(q) is not just the sum of the variances of component + returns but also includes all the covariances. 
+ Specifically, under the assumption that returns \eqn{R_t} + are stationary, \deqn{Var[R_t(q)] = \sum_{i=0}^{q-1}\sum_{j=0}^{q-1} + Cov(R(t-i),R(t-j)) = q{\sigma^2} + 2{\sigma^2} \sum_{k=1}^{q-1} + (q-k)\rho(k)} where \eqn{\rho(k) = + Cov(R(t),R(t-k))/Var[R_t]} is the \eqn{k^{th}} order + autocorrelation coefficient of the series of returns. This + yields the following relationship between SR and SR(q): + \deqn{SR(q) = \eta(q)SR} where + \deqn{\eta(q) = q/\sqrt{q + 2\sum_{k=1}^{q-1}(q-k)\rho(k)}} +} +\examples{ +data(edhec) +head(LoSharpe(edhec,0,3)) +} +\author{ + Brian G. Peterson, Peter Carl, Shubhankit Mohan +} +\references{ + Getmansky, Mila, Lo, Andrew W. and Makarov, Igor,\emph{ + An Econometric Model of Serial Correlation and + Illiquidity in Hedge Fund Returns} (March 1, 2003). MIT + Sloan Working Paper No. 4288-03; MIT Laboratory for + Financial Engineering Working Paper No. LFE-1041A-03; + EFMA 2003 Helsinki Meetings. \code{\link[stats]{}} \cr + \url{http://ssrn.com/abstract=384700} +} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{non-iid} +\keyword{ts} + Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/se.LoSharpe.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/se.LoSharpe.Rd (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/se.LoSharpe.Rd 2013-08-28 08:55:04 UTC (rev 2916) @@ -0,0 +1,70 @@ +\name{se.LoSharpe} +\alias{se.LoSharpe} +\title{Andrew Lo Sharpe Ratio Statistics} +\usage{ + se.LoSharpe(Ra, Rf = 0, q = 3, ...) +} +\arguments{ + \item{Ra}{an xts, vector, matrix, data frame, timeSeries + or zoo object of daily asset returns} + + \item{Rf}{an xts, vector, matrix, data frame, timeSeries + or zoo object of annualized Risk Free Rate} + + \item{q}{Number of autocorrelated lag periods. Taken as 3 + (Default)} + + \item{\dots}{any other pass thru parameters} +} +\description{ + Although the Sharpe ratio has become part of the canon of + modern financial analysis, its applications typically do + not account for the fact that it is an estimated + quantity, subject to estimation errors which can be + substantial in some cases. + + Many studies have documented various violations of the + assumption of IID returns for financial securities. + + Under the assumption of stationarity, a version of the + Central Limit Theorem can still be applied to the + estimator. +} +\details{ + The relationship between SR and SR(q) is somewhat more + involved for non-IID returns because the variance of + Rt(q) is not just the sum of the variances of component + returns but also includes all the covariances. + Specifically, under the assumption that returns \eqn{R_t} + are stationary, \deqn{Var[R_t(q)] = \sum_{i=0}^{q-1}\sum_{j=0}^{q-1} + Cov(R(t-i),R(t-j)) = q{\sigma^2} + 2{\sigma^2} \sum_{k=1}^{q-1} + (q-k)\rho(k)} where \eqn{\rho(k) = + Cov(R(t),R(t-k))/Var[R_t]} is the \eqn{k^{th}} order + autocorrelation coefficient of the series of returns. This + yields the following relationship between SR and SR(q): + \deqn{SR(q) = \eta(q)SR} where + \deqn{\eta(q) = q/\sqrt{q + 2\sum_{k=1}^{q-1}(q-k)\rho(k)}} +} +\examples{ +data(edhec) +head(se.LoSharpe(edhec,0,3)) +} +\author{ + Brian G. Peterson, Peter Carl, Shubhankit Mohan +} +\references{ + Getmansky, Mila, Lo, Andrew W. and Makarov, Igor,\emph{ + An Econometric Model of Serial Correlation and + Illiquidity in Hedge Fund Returns} (March 1, 2003).
MIT + Sloan Working Paper No. 4288-03; MIT Laboratory for + Financial Engineering Working Paper No. LFE-1041A-03; + EFMA 2003 Helsinki Meetings. \code{\link[stats]{}} \cr + \url{http://ssrn.com/abstract=384700} +} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{non-iid} +\keyword{ts} + From noreply at r-forge.r-project.org Wed Aug 28 12:38:55 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 28 Aug 2013 12:38:55 +0200 (CEST) Subject: [Returnanalytics-commits] r2917 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: . R man Message-ID: <20130828103855.BAC641845BC@r-forge.r-project.org> Author: shubhanm Date: 2013-08-28 12:38:55 +0200 (Wed, 28 Aug 2013) New Revision: 2917 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/man/chart.AcarSim.Rd Removed: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Normalized.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.normDD.Rd Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE pkg/PerformanceAnalytics/sandbox/Shubhankit/R/AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd Log: .Rd/R addition for Shane Acar Loss Simulation chart wrapper Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION 2013-08-28 08:55:04 UTC (rev 2916) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/DESCRIPTION 2013-08-28 10:38:55 UTC (rev 2917) @@ -17,7 +17,6 @@ ByteCompile: TRUE Collate: 'ACStdDev.annualized.R' - 'CalmarRatio.Normalized.R' 'CDDopt.R' 'CDrawdown.R' 'chart.Autocorrelation.R' @@ -27,7 +26,6 @@ 'na.skip.R' 'Return.GLM.R' 'table.ComparitiveReturn.GLM.R' - 'table.normDD.R' 'table.UnsmoothReturn.R' 'UnsmoothReturn.R' 'AcarSim.R' @@ -37,3 +35,4 @@ 'LoSharpe.R' 'Return.Okunev.R' 'se.LoSharpe.R' + 'chart.AcarSim.R' Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE 2013-08-28 08:55:04 UTC (rev 2916) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/NAMESPACE 2013-08-28 10:38:55 UTC (rev 2917) @@ -1,21 +1,18 @@ export(AcarSim) export(ACStdDev.annualized) export(CalmarRatio.Norm) -export(CalmarRatio.Normalized) export(CDD.Opt) export(CDDOpt) export(CDrawdown) +export(chart.AcarSim) export(chart.Autocorrelation) export(EMaxDDGBM) export(GLMSmoothIndex) export(LoSharpe) -export(QP.Norm) export(Return.GLM) export(Return.Okunev) export(se.LoSharpe) export(SterlingRatio.Norm) -export(SterlingRatio.Normalized) export(table.ComparitiveReturn.GLM) export(table.EMaxDDGBM) -export(table.NormDD) export(table.UnsmoothReturn) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/AcarSim.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/AcarSim.R 2013-08-28 08:55:04 UTC (rev 2916) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/AcarSim.R 2013-08-28 10:38:55 UTC (rev 2917) @@ -7,12 +7,12 @@ #' \emph{two to two} by step of \emph{0.1} . The process has been repeated \bold{six thousand times}. 
#' @details Unfortunately, there is no \bold{analytical formulae} to establish the maximum drawdown properties under #' the random walk assumption. We should note first that due to its definition, the maximum drawdown -#' divided by volatility is an only function of the ratio mean divided by volatility. +#' divided by volatility can be interpreted as the only function of the ratio mean divided by volatility. #' \deqn{MD/[\sigma]= Min (\sum[X(j)])/\sigma = F(\mu/\sigma)} #' Where j varies from 1 to n ,which is the number of drawdown's in simulation #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns -#' @author Peter Carl, Brian Peterson, Shubhankit Mohan +#' @author Shubhankit Mohan #' @references Maximum Loss and Maximum Drawdown in Financial Markets,\emph{International Conference Sponsored by BNP and Imperial College on: #' Forecasting Financial Markets, London, United Kingdom, May 1997} \url{http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf} #' @keywords Maximum Loss Simulated Drawdown @@ -22,7 +22,7 @@ #' @rdname AcarSim #' @export AcarSim <- - function(R) + function() { R = checkData(Ra, method="xts") # Get dimensions and labels Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Normalized.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Normalized.R 2013-08-28 08:55:04 UTC (rev 2916) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/CalmarRatio.Normalized.R 2013-08-28 10:38:55 UTC (rev 2917) @@ -1,139 +0,0 @@ -#' QP function fo calculation of Sharpe Ratio -#' -#' calculate a Normalized Calmar or Sterling reward/risk ratio -#' -#' Normalized Calmar and Sterling Ratios are yet another method of creating a -#' risk-adjusted measure for ranking investments similar to the -#' \code{\link{SharpeRatio}}. -#' -#' Both the Normalized Calmar and the Sterling ratio are the ratio of annualized return -#' over the absolute value of the maximum drawdown of an investment. The -#' Sterling ratio adds an excess risk measure to the maximum drawdown, -#' traditionally and defaulting to 10\%. -#' -#' It is also traditional to use a three year return series for these -#' calculations, although the functions included here make no effort to -#' determine the length of your series. If you want to use a subset of your -#' series, you'll need to truncate or subset the input data to the desired -#' length. -#' -#' Many other measures have been proposed to do similar reward to risk ranking. -#' It is the opinion of this author that newer measures such as Sortino's -#' \code{\link{UpsidePotentialRatio}} or Favre's modified -#' \code{\link{SharpeRatio}} are both \dQuote{better} measures, and -#' should be preferred to the Calmar or Sterling Ratio. -#' -#' @aliases Normalized.CalmarRatio Normalized.SterlingRatio -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of -#' asset returns -#' @param scale number of periods in a year (daily scale = 252, monthly scale = -#' 12, quarterly scale = 4) -#' @param excess for Sterling Ratio, excess amount to add to the max drawdown, -#' traditionally and default .1 (10\%) -#' @author Brian G. Peterson -#' @seealso -#' \code{\link{Return.annualized}}, \cr -#' \code{\link{maxDrawdown}}, \cr -#' \code{\link{SharpeRatio.modified}}, \cr -#' \code{\link{UpsidePotentialRatio}} -#' @references Bacon, Carl. \emph{Magdon-Ismail, M. and Amir Atiya, Maximum drawdown. Risk Magazine, 01 Oct 2004. 
-#' @keywords ts multivariate distribution models -#' @examples -#' -#' data(managers) -#' Normalized.CalmarRatio(managers[,1,drop=FALSE]) -#' Normalized.CalmarRatio(managers[,1:6]) -#' Normalized.SterlingRatio(managers[,1,drop=FALSE]) -#' Normalized.SterlingRatio(managers[,1:6]) -#' -#' @export -#' @rdname CalmarRatio.normalized -QP.Norm <- function (R, tau,scale = NA) -{ - Sharpe= as.numeric(SharpeRatio.annualized(edhec)) -return(.63519+(.5*log(tau))+log(Sharpe)) -} - -#' @export -CalmarRatio.Normalized <- function (R, tau = 1,scale = NA) -{ # @author Brian G. Peterson - - # DESCRIPTION: - # Inputs: - # Ra: in this case, the function anticipates having a return stream as input, - # rather than prices. - # tau : scaled Time in Years - # scale: number of periods per year - # Outputs: - # This function returns a Calmar Ratio - - # FUNCTION: - - R = checkData(R) - if(is.na(scale)) { - freq = periodicity(R) - switch(freq$scale, - minute = {stop("Data periodicity too high")}, - hourly = {stop("Data periodicity too high")}, - daily = {scale = 252}, - weekly = {scale = 52}, - monthly = {scale = 12}, - quarterly = {scale = 4}, - yearly = {scale = 1} - ) - } - Time = nyears(R) - annualized_return = Return.annualized(R, scale=scale) - drawdown = abs(maxDrawdown(R)) - result = (annualized_return/drawdown)*(QP.Norm(R,Time)/QP.Norm(R,tau))*(tau/Time) - rownames(result) = "Normalized Calmar Ratio" - return(result) -} - -#' @export -#' @rdname CalmarRatio.normalized -SterlingRatio.Normalized <- - function (R, tau=1,scale=NA, excess=.1) - { # @author Brian G. Peterson - - # DESCRIPTION: - # Inputs: - # Ra: in this case, the function anticipates having a return stream as input, - # rather than prices. - # scale: number of periods per year - # Outputs: - # This function returns a Sterling Ratio - - # FUNCTION: - Time = nyears(R) - R = checkData(R) - if(is.na(scale)) { - freq = periodicity(R) - switch(freq$scale, - minute = {stop("Data periodicity too high")}, - hourly = {stop("Data periodicity too high")}, - daily = {scale = 252}, - weekly = {scale = 52}, - monthly = {scale = 12}, - quarterly = {scale = 4}, - yearly = {scale = 1} - ) - } - annualized_return = Return.annualized(R, scale=scale) - drawdown = abs(maxDrawdown(R)+excess) - result = annualized_return/drawdown*(QP.Norm(R,Time)/QP.Norm(R,tau))*(tau/Time) - rownames(result) = paste("Normalized Sterling Ratio (Excess = ", round(excess*100,0), "%)", sep="") - return(result) - } - -############################################################################### -# R (http://r-project.org/) Econometrics for Performance and Risk Analysis -# -# Copyright (c) 2004-2013 Peter Carl and Brian G. Peterson -# -# This R package is distributed under the terms of the GNU Public License (GPL) -# for full details see the file COPYING -# -# $Id: CalmarRatioNormalized.R 1955 2012-05-23 16:38:16Z braverock $ -# -############################################################################### Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.AcarSim.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.AcarSim.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/chart.AcarSim.R 2013-08-28 10:38:55 UTC (rev 2917) @@ -0,0 +1,94 @@ +#' @title Acar-Shane Maximum Loss Plot +#' +#'@description To get some insight on the relationships between maximum drawdown per unit of volatility +#'and mean return divided by volatility, we have proceeded to Monte-Carlo simulations. 
+#' We have simulated cash flows over a period of 36 monthly returns and measured maximum +#'drawdown for varied levels of annualised return divided by volatility varying from minus +#' \emph{two to two} by step of \emph{0.1}. The process has been repeated \bold{six thousand times}. +#' @details Unfortunately, there is no \bold{analytical formula} to establish the maximum drawdown properties under +#' the random walk assumption. We should note first that due to its definition, the maximum drawdown +#' divided by volatility is a function only of the ratio of mean to volatility. +#' \deqn{MD/\sigma = Min(\sum X(j))/\sigma = F(\mu/\sigma)} +#' where j varies from 1 to n, the number of drawdowns in the simulation +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of +#' asset returns +#' @author Shubhankit Mohan +#' @references Maximum Loss and Maximum Drawdown in Financial Markets,\emph{International Conference Sponsored by BNP and Imperial College on: +#' Forecasting Financial Markets, London, United Kingdom, May 1997} \url{http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf} +#' @keywords Maximum Loss Simulated Drawdown +#' @examples +#' library(PerformanceAnalytics) +#' chart.AcarSim(edhec) +#' @rdname chart.AcarSim +#' @export +chart.AcarSim <- + function(R) + { + R = checkData(R, method="xts") + # Get dimensions and labels + # simulation parameters estimated from the input data + mu=mean(Return.annualized(R)) + monthly=(1+mu)^(1/12)-1 + sig=StdDev.annualized(R[,1])[1]; + T= 36 + j=1 + dt=1/T + nsim=6000; + thres=4; + r=matrix(0,nsim,T+1) + monthly = 0 + r[,1]=monthly; + # Sigma 'monthly volatility' will be the varying term + ratio= seq(-2, 2, by=.1); + len = length(ratio) + ddown=array(0, dim=c(nsim,len,thres)) + fddown=array(0, dim=c(len,thres)) + Z <- array(0, c(len)) + for(i in 1:len) + { + monthly = sig*ratio[i]; + + for(j in 1:nsim) + { + dz=rnorm(T) + + # 3 factor due to 36 month time frame investigated in the paper + r[j,2:37]=monthly+(sig*dz*sqrt(3*dt)) + + ddown[j,i,1]= ES((r[j,]),.99) + ddown[j,i,1][is.na(ddown[j,i,1])] <- 0 + fddown[i,1]=fddown[i,1]+ddown[j,i,1] + ddown[j,i,2]= ES((r[j,]),.95) + ddown[j,i,2][is.na(ddown[j,i,2])] <- 0 + fddown[i,2]=fddown[i,2]+ddown[j,i,2] + ddown[j,i,3]= ES((r[j,]),.90) + ddown[j,i,3][is.na(ddown[j,i,3])] <- 0 + fddown[i,3]=fddown[i,3]+ddown[j,i,3] + ddown[j,i,4]= ES((r[j,]),.85) + ddown[j,i,4][is.na(ddown[j,i,4])] <- 0 + fddown[i,4]=fddown[i,4]+ddown[j,i,4] + assign("last.warning", NULL, envir = baseenv()) + } + } + plot(((fddown[,1])/(sig*nsim)),xlab="Annualised Return/Volatility from [-2,2]",ylab="Maximum Drawdown/Volatility",type='o',col="blue") + lines(((fddown[,2])/(sig*nsim)),type='o',col="pink") + lines(((fddown[,3])/(sig*nsim)),type='o',col="green") + lines(((fddown[,4])/(sig*nsim)),type='o',col="red") + legend(32,-4, c("%99", "%95", "%90","%85"), col = c("blue","pink","green","red"), text.col= "black", + lty = c(2, -1, 1), pch = c(-1, 3, 4), merge = TRUE, bg='gray90') + + title("Maximum Drawdown/Volatility as a function of Return/Volatility +36 monthly returns simulated 6,000 times") + } + +############################################################################### +# R (http://r-project.org/) Econometrics for Performance and Risk Analysis +# +# Copyright (c) 2004-2012 Peter Carl and Brian G.
Peterson +# +# This R package is distributed under the terms of the GNU Public License (GPL) +# for full details see the file COPYING +# +# $Id: AcarSim.R 2163 2012-07-16 00:30:19Z braverock $ +# +############################################################################### \ No newline at end of file Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R 2013-08-28 08:55:04 UTC (rev 2916) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/table.normDD.R 2013-08-28 10:38:55 UTC (rev 2917) @@ -1,109 +0,0 @@ -#'@title Generalised Lambda Distribution Simulated Drawdown -#'@description When selecting a hedge fund manager, one risk measure investors often -#' consider is drawdown. How should drawdown distributions look? Carr Futures' -#' Galen Burghardt, Ryan Duncan and Lianyan Liu share some insights from their -#'research to show investors how to begin to answer this tricky question -#'@details To simulate net asset value (NAV) series where skewness and kurtosis are zero, -#' we draw sample returns from a lognormal return distribution. To capture skewness -#' and kurtosis, we sample returns from a \bold{generalised \eqn{\lambda} distribution}.The values of -#' skewness and excess kurtosis used were roughly consistent with the range of values the paper -#' observed for commodity trading advisers in our database. The NAV series is constructed -#' from the return series. The simulated drawdowns are then derived and used to produce -#' the theoretical drawdown distributions. A typical run usually requires \bold{10,000} -#' iterations to produce a smooth distribution. -#' -#' -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of -#' asset returns -#' @references Burghardt, G., and L. Liu, \emph{ It's the Autocorrelation, Stupid (November 2012) Newedge -#' working paper.} -#' \code{\link[stats]{}} \cr -#' \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} -#' Burghardt, G., Duncan, R. and L. Liu, \eph{Deciphering drawdown}. Risk magazine, Risk management for investors, September, S16-S20, 2003. 
\url{http://www.risk.net/data/risk/pdf/investor/0903_risk.pdf} -#' @author Peter Carl, Brian Peterson, Shubhankit Mohan -#' @keywords Simulated Drawdown Using Brownian Motion Assumptions -#' @seealso Drawdowns.R -#' @rdname table.normDD -#' @export -table.NormDD <- - function (R,digits =4) - {# @author - - # DESCRIPTION: - # Downside Risk Summary: Statistics and Stylized Facts - - # Inputs: - # R: a regular timeseries of returns (rather than prices) - # Output: Table of Estimated Drawdowns - require("gld") - - y = checkData(R, method = "xts") - columns = ncol(y) - rows = nrow(y) - columnnames = colnames(y) - rownames = rownames(y) - T= nyears(y); - n <- 1000 - dt <- 1/T; - r0 <- 0; - s0 <- 1; - # for each column, do the following: - for(column in 1:columns) { - x = y[,column] - mu = Return.annualized(x, scale = NA, geometric = TRUE) - sig=StdDev.annualized(x) - skew = skewness(x) - kurt = kurtosis(x) - r <- matrix(0,T+1,n) # matrix to hold short rate paths - s <- matrix(0,T+1,n) - r[1,] <- r0 - s[1,] <- s0 - drawdown <- matrix(0,n) - # return(Ed) - - for(j in 1:n){ - r[2:(T+1),j]= rgl(T,mu,sig,skew,kurt) - for(i in 2:(T+1)){ - - dr <- r[i,j]*dt - s[i,j] <- s[i-1,j] + (dr/100) - } - - - drawdown[j] = as.numeric(maxdrawdown(s[,j])[1]) - } - z = c((mu*100), - (sig*100), - ((mean(drawdown)))) - znames = c( - "Annual Returns in %", - "Std Devetions in %", - "Normalized Drawdown Drawdown in %" - ) - if(column == 1) { - resultingtable = data.frame(Value = z, row.names = znames) - } - else { - nextcolumn = data.frame(Value = z, row.names = znames) - resultingtable = cbind(resultingtable, nextcolumn) - } - } - colnames(resultingtable) = columnnames - ans = base::round(resultingtable, digits) - ans - # t <- seq(0, T, dt) - # matplot(t, r[1,1:T], type="l", lty=1, main="Short Rate Paths", ylab="rt") - - } - -############################################################################### -# R (http://r-project.org/) -# -# Copyright (c) 2004-2013 -# -# This R package is distributed under the terms of the GNU Public License (GPL) -# for full details see the file COPYING -# -# $Id: EMaxDDGBM -# -############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd 2013-08-28 08:55:04 UTC (rev 2916) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/AcarSim.Rd 2013-08-28 10:38:55 UTC (rev 2917) @@ -1,50 +1,50 @@ -\name{AcarSim} -\alias{AcarSim} -\title{Acar-Shane Maximum Loss Plot} -\usage{ - AcarSim(R) -} -\arguments{ - \item{R}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} -} -\description{ - To get some insight on the relationships between maximum - drawdown per unit of volatility and mean return divided - by volatility, we have proceeded to Monte-Carlo - simulations. We have simulated cash flows over a period - of 36 monthly returns and measured maximum drawdown for - varied levels of annualised return divided by volatility - varying from minus \emph{two to two} by step of - \emph{0.1} . The process has been repeated \bold{six - thousand times}. -} -\details{ - Unfortunately, there is no \bold{analytical formulae} to - establish the maximum drawdown properties under the - random walk assumption. We should note first that due to - its definition, the maximum drawdown divided by - volatility is an only function of the ratio mean divided - by volatility. 
\deqn{MD/[\sigma]= Min (\sum[X(j)])/\sigma - = F(\mu/\sigma)} Where j varies from 1 to n ,which is the - number of drawdown's in simulation -} -\examples{ -library(PerformanceAnalytics) -AcarSim(edhec) -} -\author{ - Peter Carl, Brian Peterson, Shubhankit Mohan -} -\references{ - Maximum Loss and Maximum Drawdown in Financial - Markets,\emph{International Conference Sponsored by BNP - and Imperial College on: Forecasting Financial Markets, - London, United Kingdom, May 1997} - \url{http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf} -} -\keyword{Drawdown} -\keyword{Loss} -\keyword{Maximum} -\keyword{Simulated} - +\name{AcarSim} +\alias{AcarSim} +\title{Acar-Shane Maximum Loss Plot} +\usage{ + AcarSim() +} +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} +} +\description{ + To get some insight on the relationships between maximum + drawdown per unit of volatility and mean return divided + by volatility, we have proceeded to Monte-Carlo + simulations. We have simulated cash flows over a period + of 36 monthly returns and measured maximum drawdown for + varied levels of annualised return divided by volatility + varying from minus \emph{two to two} by step of + \emph{0.1} . The process has been repeated \bold{six + thousand times}. +} +\details{ + Unfortunately, there is no \bold{analytical formulae} to + establish the maximum drawdown properties under the + random walk assumption. We should note first that due to + its definition, the maximum drawdown divided by + volatility can be interpreted as the only function of the + ratio mean divided by volatility. \deqn{MD/[\sigma]= Min + (\sum[X(j)])/\sigma = F(\mu/\sigma)} Where j varies from + 1 to n ,which is the number of drawdown's in simulation +} +\examples{ +library(PerformanceAnalytics) +AcarSim(edhec) +} +\author{ + Shubhankit Mohan +} +\references{ + Maximum Loss and Maximum Drawdown in Financial + Markets,\emph{International Conference Sponsored by BNP + and Imperial College on: Forecasting Financial Markets, + London, United Kingdom, May 1997} + \url{http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf} +} +\keyword{Drawdown} +\keyword{Loss} +\keyword{Maximum} +\keyword{Simulated} + Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd 2013-08-28 08:55:04 UTC (rev 2916) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/CalmarRatio.normalized.Rd 2013-08-28 10:38:55 UTC (rev 2917) @@ -1,77 +0,0 @@ -\name{QP.Norm} -\alias{Normalized.CalmarRatio} -\alias{Normalized.SterlingRatio} -\alias{QP.Norm} -\alias{SterlingRatio.Normalized} -\title{QP function fo calculation of Sharpe Ratio} -\usage{ - QP.Norm(R, tau, scale = NA) - - SterlingRatio.Normalized(R, tau = 1, scale = NA, - excess = 0.1) -} -\arguments{ - \item{R}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} - - \item{scale}{number of periods in a year (daily scale = - 252, monthly scale = 12, quarterly scale = 4)} - - \item{excess}{for Sterling Ratio, excess amount to add to - the max drawdown, traditionally and default .1 (10\%)} -} -\description{ - calculate a Normalized Calmar or Sterling reward/risk - ratio -} -\details{ - Normalized Calmar and Sterling Ratios are yet another - method of creating a risk-adjusted measure for ranking - investments similar to the \code{\link{SharpeRatio}}. 
- - Both the Normalized Calmar and the Sterling ratio are the - ratio of annualized return over the absolute value of the - maximum drawdown of an investment. The Sterling ratio - adds an excess risk measure to the maximum drawdown, - traditionally and defaulting to 10\%. - - It is also traditional to use a three year return series - for these calculations, although the functions included - here make no effort to determine the length of your - series. If you want to use a subset of your series, - you'll need to truncate or subset the input data to the - desired length. - - Many other measures have been proposed to do similar - reward to risk ranking. It is the opinion of this author - that newer measures such as Sortino's - \code{\link{UpsidePotentialRatio}} or Favre's modified - \code{\link{SharpeRatio}} are both \dQuote{better} - measures, and should be preferred to the Calmar or - Sterling Ratio. -} -\examples{ -data(managers) - Normalized.CalmarRatio(managers[,1,drop=FALSE]) - Normalized.CalmarRatio(managers[,1:6]) - Normalized.SterlingRatio(managers[,1,drop=FALSE]) - Normalized.SterlingRatio(managers[,1:6]) -} -\author{ - Brian G. Peterson -} -\references{ - Bacon, Carl. \emph{Magdon-Ismail, M. and Amir Atiya, - Maximum drawdown. Risk Magazine, 01 Oct 2004. -} -\seealso{ - \code{\link{Return.annualized}}, \cr - \code{\link{maxDrawdown}}, \cr - \code{\link{SharpeRatio.modified}}, \cr - \code{\link{UpsidePotentialRatio}} -} -\keyword{distribution} -\keyword{models} -\keyword{multivariate} -\keyword{ts} - Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/chart.AcarSim.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/chart.AcarSim.Rd (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/chart.AcarSim.Rd 2013-08-28 10:38:55 UTC (rev 2917) @@ -0,0 +1,50 @@ +\name{chart.AcarSim} +\alias{chart.AcarSim} +\title{Acar-Shane Maximum Loss Plot} +\usage{ + chart.AcarSim(R) +} +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} +} +\description{ + To get some insight on the relationships between maximum + drawdown per unit of volatility and mean return divided + by volatility, we have proceeded to Monte-Carlo + simulations. We have simulated cash flows over a period + of 36 monthly returns and measured maximum drawdown for + varied levels of annualised return divided by volatility + varying from minus \emph{two to two} by step of + \emph{0.1} . The process has been repeated \bold{six + thousand times}. +} +\details{ + Unfortunately, there is no \bold{analytical formulae} to + establish the maximum drawdown properties under the + random walk assumption. We should note first that due to + its definition, the maximum drawdown divided by + volatility is an only function of the ratio mean divided + by volatility. 
\deqn{MD/[\sigma]= Min (\sum[X(j)])/\sigma + = F(\mu/\sigma)} Where j varies from 1 to n ,which is the + number of drawdown's in simulation +} +\examples{ +library(PerformanceAnalytics) +chart.AcarSim(edhec) +} +\author{ + Shubhankit Mohan +} +\references{ + Maximum Loss and Maximum Drawdown in Financial + Markets,\emph{International Conference Sponsored by BNP + and Imperial College on: Forecasting Financial Markets, + London, United Kingdom, May 1997} + \url{http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf} +} +\keyword{Drawdown} +\keyword{Loss} +\keyword{Maximum} +\keyword{Simulated} + Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.normDD.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.normDD.Rd 2013-08-28 08:55:04 UTC (rev 2916) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/table.normDD.Rd 2013-08-28 10:38:55 UTC (rev 2917) @@ -1,56 +0,0 @@ -\name{table.NormDD} -\alias{table.NormDD} -\title{Generalised Lambda Distribution Simulated Drawdown} -\usage{ - table.NormDD(R, digits = 4) -} -\arguments{ - \item{R}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} -} -\description{ - When selecting a hedge fund manager, one risk measure - investors often consider is drawdown. How should drawdown - distributions look? Carr Futures' Galen Burghardt, Ryan - Duncan and Lianyan Liu share some insights from their - research to show investors how to begin to answer this - tricky question -} -\details{ - To simulate net asset value (NAV) series where skewness - and kurtosis are zero, we draw sample returns from a - lognormal return distribution. To capture skewness and - kurtosis, we sample returns from a \bold{generalised - \eqn{\lambda} distribution}.The values of skewness and - excess kurtosis used were roughly consistent with the - range of values the paper observed for commodity trading - advisers in our database. The NAV series is constructed - from the return series. The simulated drawdowns are then - derived and used to produce the theoretical drawdown - distributions. A typical run usually requires - \bold{10,000} iterations to produce a smooth - distribution. -} -\author{ - Peter Carl, Brian Peterson, Shubhankit Mohan -} -\references{ - Burghardt, G., and L. Liu, \emph{ It's the - Autocorrelation, Stupid (November 2012) Newedge working - paper.} \code{\link[stats]{}} \cr - \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} - Burghardt, G., Duncan, R. and L. Liu, \eph{Deciphering - drawdown}. Risk magazine, Risk management for investors, - September, S16-S20, 2003. - \url{http://www.risk.net/data/risk/pdf/investor/0903_risk.pdf} -} -\seealso{ - Drawdowns.R -} -\keyword{Assumptions} -\keyword{Brownian} -\keyword{Drawdown} -\keyword{Motion} -\keyword{Simulated} -\keyword{Using} - From noreply at r-forge.r-project.org Wed Aug 28 20:02:06 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 28 Aug 2013 20:02:06 +0200 (CEST) Subject: [Returnanalytics-commits] r2918 - in pkg/Meucci: . 
R demo man Message-ID: <20130828180206.69247183913@r-forge.r-project.org> Author: xavierv Date: 2013-08-28 20:02:06 +0200 (Wed, 28 Aug 2013) New Revision: 2918 Added: pkg/Meucci/R/EfficientFrontierReturns.R pkg/Meucci/R/EfficientFrontierReturnsBenchmark.R pkg/Meucci/demo/S_MeanVarianceBenchmark.R pkg/Meucci/demo/covNRets.Rda pkg/Meucci/man/EfficientFrontierReturns.Rd pkg/Meucci/man/EfficientFrontierReturnsBenchmark.Rd Modified: pkg/Meucci/DESCRIPTION pkg/Meucci/NAMESPACE pkg/Meucci/R/ConvertChangeInYield2Price.R pkg/Meucci/R/MaxRsqCS.R pkg/Meucci/demo/S_CrossSectionConstrainedIndustries.R Log: - added S_MeanVarianceBenchmark demo script from chapter 6 and its associated functions Modified: pkg/Meucci/DESCRIPTION =================================================================== --- pkg/Meucci/DESCRIPTION 2013-08-28 10:38:55 UTC (rev 2917) +++ pkg/Meucci/DESCRIPTION 2013-08-28 18:02:06 UTC (rev 2918) @@ -89,6 +89,8 @@ 'MvnRnd.R' 'MleRecursionForStudentT.R' 'CovertCompoundedReturns2Price.R' + 'MaxRsqCS.R' + 'EfficientFrontierReturnsBenchmark.R' + 'EfficientFrontierReturns.R' ' FitOrnsteinUhlenbeck.R' - 'MaxRsqCS.R' Modified: pkg/Meucci/NAMESPACE =================================================================== --- pkg/Meucci/NAMESPACE 2013-08-28 10:38:55 UTC (rev 2917) +++ pkg/Meucci/NAMESPACE 2013-08-28 18:02:06 UTC (rev 2918) @@ -10,6 +10,8 @@ export(ConvertCompoundedReturns2Price) export(Cumul2Raw) export(DetectOutliersViaMVE) +export(EfficientFrontierReturns) +export(EfficientFrontierReturnsBenchmark) export(EntropyProg) export(FitExpectationMaximization) export(FitMultivariateGarch) Modified: pkg/Meucci/R/ConvertChangeInYield2Price.R =================================================================== --- pkg/Meucci/R/ConvertChangeInYield2Price.R 2013-08-28 10:38:55 UTC (rev 2917) +++ pkg/Meucci/R/ConvertChangeInYield2Price.R 2013-08-28 18:02:06 UTC (rev 2918) @@ -21,8 +21,9 @@ ConvertChangeInYield2Price = function( Exp_DY, Cov_DY, Times2Mat, CurrentPrices ) { Mu = log( CurrentPrices ) - Times2Mat * Exp_DY; - Sigma = diag( Times2Mat^2 ) %*% Cov_DY; + Sigma = diag( Times2Mat ^ 2, length(Times2Mat) ) %*% Cov_DY; + Exp_Prices = exp(Mu + (1/2) * diag( Sigma )); Cov_Prices = exp(Mu + (1/2) * diag( Sigma )) %*% t(exp(Mu + (1/2) * diag(Sigma))) * ( exp( Sigma ) - 1); Added: pkg/Meucci/R/EfficientFrontierReturns.R =================================================================== --- pkg/Meucci/R/EfficientFrontierReturns.R (rev 0) +++ pkg/Meucci/R/EfficientFrontierReturns.R 2013-08-28 18:02:06 UTC (rev 2918) @@ -0,0 +1,97 @@ +#' Compute the mean-variance efficient frontier (on returns) by quadratic programming, as described in +#' A. Meucci "Risk and Asset Allocation", Springer, 2005 +#' +#' @param NumPortf : [scalar] number of portfolio in the efficient frontier +#' @param Covariance : [matrix] (N x N) covariance matrix +#' @param ExpectedValues : [vector] (N x 1) expected returns +#' @param Constraints : [struct] set of constraints. 
Default: weights sum to one, and no-short positions +#' +#' @return ExpectedValue : [vector] (NumPortf x 1) expected values of the portfolios +#' @return Volatility : [vector] (NumPortf x 1) standard deviations of the portfolios +#' @return Composition : [matrix] (NumPortf x N) optimal portfolios +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "EfficientFrontierReturns.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} +#' @export + +EfficientFrontierReturns = function(NumPortf, Covariance, ExpectedValues, Constraints = NULL) +{ + + NumAssets = ncol(Covariance); + + ################################################################################################################## + # determine return of minimum-risk portfolio + FirstDegree = matrix( 0, NumAssets, 1); + SecondDegree = Covariance; + if( !length(Constraints) ) + { + Aeq = matrix( 1, 1, NumAssets); + beq = 1; + A = -diag( 1, NumAssets); # no-short constraint + b = matrix( 0, NumAssets, 1); # no-short constraint + }else + { + Aeq = Constraints$Aeq; + beq = Constraints$beq; + A = Constraints$Aleq; # no-short constraint + b = Constraints$bleq; # no-short constraint + } + + x0 = 1 / NumAssets * matrix( 1, NumAssets, 1); + Amat = rbind( Aeq, A); + bvec = rbind( beq, b); +###### MinVol_Weights = quadprog( SecondDegree, -FirstDegree, A, b, Aeq, beq, [], [], x0, options ); + MinVol_Weights = matrix( solve.QP( Dmat = SecondDegree, dvec = -FirstDegree, Amat = -t(Amat), bvec = -bvec, meq = length( beq ) )$solution ); + MinVol_Return = t( MinVol_Weights ) %*% ExpectedValues; + + ################################################################################################################## + ### Determine return of maximum-return portfolio + MaxRet_Return = max(ExpectedValues); + MaxRet_Index = which( ExpectedValues == max(ExpectedValues) ); + ################################################################################################################## + ### Slice efficient frontier in NumPortf equally thick horizontal sectors in the upper branch only + Step = (MaxRet_Return - MinVol_Return) / (NumPortf - 1); + TargetReturns = seq( MinVol_Return, MaxRet_Return, Step ); + + ################################################################################################################## + ### Compute the NumPortf compositions and risk-return coordinates of the optimal allocations relative to each slice + + # initialization + Composition = matrix( NaN, NumPortf, NumAssets); + Volatility = matrix( NaN, NumPortf, 1); + ExpectedValue = matrix( NaN, NumPortf, 1); + + # start with min vol portfolio + Composition[ 1, ] = t(MinVol_Weights); + Volatility[ 1 ] = sqrt(t(MinVol_Weights) %*% Covariance %*% MinVol_Weights); + ExpectedValue[ 1 ] = t(MinVol_Weights) %*% ExpectedValues; + + for( i in 2 : (NumPortf - 1) ) + { + # determine least risky portfolio for given expected return + AEq = rbind( matrix( 1, 1, NumAssets), t(ExpectedValues) ); + bEq = rbind( 1, TargetReturns[ i ]); + Amat = rbind( AEq, A); + bvec = rbind( bEq, b) + + Weights = t( solve.QP( Dmat = SecondDegree, dvec = -FirstDegree, Amat = -t(Amat), bvec = -bvec, meq = length( bEq ) )$solution ); + #Weights = t(quadprog(SecondDegree, FirstDegree, A, b, AEq, bEq, [], [], x0, options)); + + Composition[ i, ] = Weights; + Volatility[ i ] = sqrt( Weights %*% Covariance %*% t(Weights)); + ExpectedValue[ i ] = Weights %*% ExpectedValues; + } + + # add max ret portfolio + Weights = matrix( 0, 1, NumAssets); + Weights[ MaxRet_Index ] = 1; + 
Composition[ nrow(Composition), ] = Weights; + Volatility[ length(Volatility) ] = sqrt(Weights %*% Covariance %*% t(Weights)); + ExpectedValue[ length(ExpectedValue) ] = Weights %*% ExpectedValues; + + return( list( ExpectedValue = ExpectedValue, Volatility = Volatility, Composition = Composition ) ); + +} \ No newline at end of file Added: pkg/Meucci/R/EfficientFrontierReturnsBenchmark.R =================================================================== --- pkg/Meucci/R/EfficientFrontierReturnsBenchmark.R (rev 0) +++ pkg/Meucci/R/EfficientFrontierReturnsBenchmark.R 2013-08-28 18:02:06 UTC (rev 2918) @@ -0,0 +1,98 @@ +#' Compute the mean-variance efficient frontier (on returns) by quadratic programming, as described in +#' A. Meucci "Risk and Asset Allocation", Springer, 2005 +#' +#' @param NumPortf : [scalar] number of portfolio in the efficient frontier +#' @param Covariance : [matrix] (N x N) covariance matrix +#' @param ExpectedValues : [vector] (N x 1) expected returns +#' @param Benchmark : [vector] (N x 1) of benchmark weights +#' @param Constraints : [struct] set of constraints. Default: weights sum to one, and no-short positions +#' +#' @return ExpectedValue : [vector] (NumPortf x 1) expected values of the portfolios +#' @return Volatility : [vector] (NumPortf x 1) standard deviations of the portfolios +#' @return Composition : [matrix] (NumPortf x N) optimal portfolios +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "EfficientFrontierReturnsBenchmark.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} +#' @export + +EfficientFrontierReturnsBenchmark = function(NumPortf, Covariance, ExpectedValues, Benchmark, Constraints = NULL) +{ + + NumAssets = ncol(Covariance); + + ################################################################################################################## + # determine return of minimum-risk portfolio + FirstDegree = -Covariance %*% Benchmark; + SecondDegree = Covariance; + if( !length(Constraints) ) + { + Aeq = matrix( 1, 1, NumAssets); + beq = 1; + A = -diag( 1, NumAssets); # no-short constraint + b = matrix( 0, NumAssets, 1); # no-short constraint + }else + { + Aeq = Constraints$Aeq; + beq = Constraints$beq; + A = Constraints$Aleq; # no-short constraint + b = Constraints$bleq; # no-short constraint + } + + Amat = rbind( Aeq, A); + bvec = rbind( beq, b); + +###### MinVol_Weights = quadprog( SecondDegree, -FirstDegree, A, b, Aeq, beq, [], [], x0, options ); + MinVol_Weights = matrix( solve.QP( Dmat = SecondDegree, dvec = -FirstDegree, Amat = -t(Amat), bvec = -bvec, meq = length( beq ) )$solution ); + MinVol_Return = t( MinVol_Weights ) %*% ExpectedValues; + + ################################################################################################################## + ### Determine return of maximum-return portfolio + MaxRet_Return = max(ExpectedValues); + MaxRet_Index = which( ExpectedValues == max(ExpectedValues) ); + ################################################################################################################## + ### Slice efficient frontier in NumPortf equally thick horizontal sectors in the upper branch only + Step = (MaxRet_Return - MinVol_Return) / (NumPortf - 1); + TargetReturns = seq( MinVol_Return, MaxRet_Return, Step ); + + ################################################################################################################## + ### Compute the NumPortf compositions and risk-return coordinates of the optimal allocations relative to each slice + + # initialization 
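+    # (editor's note, added for clarity) the outputs are pre-allocated with NaN
+    # so that any slice the loop below fails to fill stays visibly missing;
+    # row 1 is then filled with the minimum-risk portfolio, the last row with
+    # the maximum-return portfolio, and the loop fills the rows in between.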
+ Composition = matrix( NaN, NumPortf, NumAssets); + Volatility = matrix( NaN, NumPortf, 1); + ExpectedValue = matrix( NaN, NumPortf, 1); + + # start with min vol portfolio + Composition[ 1, ] = t(MinVol_Weights); + Volatility[ 1 ] = sqrt(t(MinVol_Weights) %*% Covariance %*% MinVol_Weights); + ExpectedValue[ 1 ] = t(MinVol_Weights) %*% ExpectedValues; + + for( i in 2 : (NumPortf - 1) ) + { + # determine least risky portfolio for given expected return + AEq = rbind( matrix( 1, 1, NumAssets), t(ExpectedValues) ); + bEq = rbind( 1, TargetReturns[ i ]); + Amat = rbind( AEq, A); + bvec = rbind( bEq, b) + + Weights = t( solve.QP( Dmat = SecondDegree, dvec = -FirstDegree, Amat = -t(Amat), bvec = -bvec, meq = length( bEq ) )$solution ); + #Weights = t(quadprog(SecondDegree, FirstDegree, A, b, AEq, bEq, [], [], x0, options)); + + Composition[ i, ] = Weights; + Volatility[ i ] = sqrt( Weights %*% Covariance %*% t(Weights)); + ExpectedValue[ i ] = Weights %*% ExpectedValues; + } + + # add max ret portfolio + Weights = matrix( 0, 1, NumAssets); + Weights[ MaxRet_Index ] = 1; + Composition[ nrow(Composition), ] = Weights; + Volatility[ length(Volatility) ] = sqrt(Weights %*% Covariance %*% t(Weights)); + ExpectedValue[ length(ExpectedValue) ] = Weights %*% ExpectedValues; + + return( list( ExpectedValue = ExpectedValue, Volatility = Volatility, Composition = Composition ) ); + +} \ No newline at end of file Modified: pkg/Meucci/R/MaxRsqCS.R =================================================================== --- pkg/Meucci/R/MaxRsqCS.R 2013-08-28 10:38:55 UTC (rev 2917) +++ pkg/Meucci/R/MaxRsqCS.R 2013-08-28 18:02:06 UTC (rev 2918) @@ -109,7 +109,6 @@ b = ipop( c = matrix( FirstDegree ), H = SecondDegree, A = Amat, b = bvec, l = lb , u = ub , r = rep(0, length(bvec)) ) - # reshape for output G = t( matrix( attributes(b)$primal, N, ) ); Modified: pkg/Meucci/demo/S_CrossSectionConstrainedIndustries.R =================================================================== --- pkg/Meucci/demo/S_CrossSectionConstrainedIndustries.R 2013-08-28 10:38:55 UTC (rev 2917) +++ pkg/Meucci/demo/S_CrossSectionConstrainedIndustries.R 2013-08-28 18:02:06 UTC (rev 2918) @@ -1,9 +1,3 @@ - -################################################################################################################## -### This script fits a cross-sectional linear factor model creating industry factors, -### -### == Chapter 3 == -################################################################################################################## #' This script fits a cross-sectional linear factor model creating industry factors, where the industry factors #' are constrained to be uncorrelated with the market as described in A. Meucci, "Risk and Asset Allocation", #' Springer, 2005, Chapter 3. @@ -49,6 +43,8 @@ #BOUNDARIES lb = -100 ub = 700 + +#THE ROWS 3 and 6 should be 0 and instead of it we got outliers. G = MaxRsqCS(X, B, W, A, D, Aeq, Deq, lb, ub); # compute intercept a and residual U Added: pkg/Meucci/demo/S_MeanVarianceBenchmark.R =================================================================== --- pkg/Meucci/demo/S_MeanVarianceBenchmark.R (rev 0) +++ pkg/Meucci/demo/S_MeanVarianceBenchmark.R 2013-08-28 18:02:06 UTC (rev 2918) @@ -0,0 +1,144 @@ +#' This script projects the distribution of the market invariants for the bond and stock markets +#' (i.e. the changes in yield to maturity and compounded returns) from the estimation interval to the investment horizon. 
+#' Then it computes the distribution of prices at the investment horizon and translates this distribution into the returns +#' distribution. +#' Finally, it computes the mean-variance efficient frontier both for a total-return and for a benchmark-driven investor +#' Described in A. Meucci,"Risk and Asset Allocation", Springer, 2005, Chapter 6. +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_MeanVarianceBenchmark.m" +# +#' @author Xavier Valls \email{flamejat@@gmail.com} + +################################################################################################################## +### Load data +load("../data/stockSeries.Rda"); + +################################################################################################################### +### Inputs + +tau = 4 / 52; # time to horizon expressed in years +tau_tilde = 1 / 52; # estimation period expressed in years +FlatCurve = 0.04; +TimeToMat = 5 / 52; # time to maturity of bond expressed in years + +# parameters of the distribution of the changes in yield to maturity +u_minus_tau = TimeToMat - tau; +mu = 0 * u_minus_tau; +sigma=( 20 + 5 / 4 * u_minus_tau ) / 10000; + +nSim = 100000; +Budget = 100; + +################################################################################################################## +### Estimation of weekly invariants stock market (compounded returns) +Week_C = log( StockSeries$Prices.TimeSeries[ -1, ]) - log( StockSeries$Prices.TimeSeries[-nrow(StockSeries$Prices.TimeSeries), ] ); +N = ncol(Week_C); + +la = 0.1; +Shrk_Exp = matrix( 0, N, 1 ); +Exp_C_Hat = ( 1 - la ) * matrix( apply( Week_C, 2, mean ) ) + la * Shrk_Exp; + +lb = 0.1; +Shrk_Cov = diag( 1, N ) * sum(diag(cov(Week_C))) / N; +Cov_C_Hat = ( 1 - lb ) * cov(Week_C) + lb * (Shrk_Cov); + +######################################################################################################(############ +### Stock market projection to horizon and pricing +Exp_Hrzn_C_Hat = Exp_C_Hat * tau / tau_tilde; +Cov_Hrzn_C_Hat = Cov_C_Hat * tau / tau_tilde; +StockCompReturns_Scenarios = rmvnorm( nSim, Exp_Hrzn_C_Hat, Cov_Hrzn_C_Hat); + +StockCurrent_Prices = matrix( StockSeries$Prices.TimeSeries[ nrow(StockSeries$Prices.TimeSeries), ]); +StockMarket_Scenarios = ( matrix( 1, nSim, 1) %*% t(StockCurrent_Prices)) * exp( StockCompReturns_Scenarios ); + +################################################################################################################## +# MV inputs - analytical +Stock = ConvertCompoundedReturns2Price(Exp_Hrzn_C_Hat, Cov_Hrzn_C_Hat, StockCurrent_Prices); +print( Stock$Exp_Prices ); +print( Stock$Cov_Prices ); + +################################################################################################################## +# MV inputs - numerical +StockExp_Prices = matrix( apply( StockMarket_Scenarios, 2, mean )); +StockCov_Prices = cov( StockMarket_Scenarios ); +print(StockExp_Prices); +print(StockCov_Prices); + +################################################################################################################## +### Bond market projection to horizon and pricing +BondCurrent_Prices_Shifted = exp( -FlatCurve * u_minus_tau); +BondCurrent_Prices = exp( -FlatCurve * TimeToMat); + +# generate changes in yield-to-maturity +DY_Scenarios = matrix( rnorm( nSim, mu * tau / tau_tilde, sigma * sqrt(tau / tau_tilde)) ); +# compute the horizon prices, (3.81) in "Risk and Asset Allocation" - Springer +X = -u_minus_tau * DY_Scenarios; +BondMarket_Scenarios = 
BondCurrent_Prices_Shifted * exp(X);
+
+# MV inputs - analytical
+Exp_Hrzn_DY_Hat  = mu * tau / tau_tilde;
+SDev_Hrzn_DY_Hat = sigma * sqrt(tau / tau_tilde);
+Cov_Hrzn_DY_Hat  = diag( SDev_Hrzn_DY_Hat, length(SDev_Hrzn_DY_Hat) ) %*% diag( SDev_Hrzn_DY_Hat, length(SDev_Hrzn_DY_Hat) );
+Bond = ConvertChangeInYield2Price(Exp_Hrzn_DY_Hat, Cov_Hrzn_DY_Hat, u_minus_tau, BondCurrent_Prices_Shifted);
+print(Bond$Exp_Prices);
+print(Bond$Cov_Prices);
+
+# MV inputs - numerical (column means; was t( mean( BondMarket_Scenarios ) ))
+BondExp_Prices = matrix( apply( BondMarket_Scenarios, 2, mean ) );
+BondCov_Prices = cov(BondMarket_Scenarios);
+
+##################################################################################################################
+### Put market together and compute returns
+Current_Prices   = rbind( StockCurrent_Prices, BondCurrent_Prices);
+Prices_Scenarios = cbind( StockMarket_Scenarios, BondMarket_Scenarios );
+Rets_Scenarios   = Prices_Scenarios / (matrix( 1, nSim, 1 ) %*% t(Current_Prices)) - 1;
+E = matrix(apply( Rets_Scenarios, 2, mean));
+S = cov(Rets_Scenarios);
+
+N = ncol(StockSeries$Prices.TimeSeries) + 1;
+w_b = matrix( 1, N, 1) / N; # relative benchmark weights
+
+##################################################################################################################
+### Portfolio optimization
+# MV total return quadratic optimization to determine one-parameter frontier of quasi-optimal solutions
+NumPortf = 40;
+Ef = EfficientFrontierReturns( NumPortf, S, E );
+Rel_ExpectedValue = array( 0, NumPortf );
+Rel_Std_Deviation = array( 0, NumPortf );
+for( k in 1 : NumPortf )
+{
+    Rel_ExpectedValue[ k ] = t( Ef$Composition[ k, ] - w_b) %*% E;
+    Rel_Std_Deviation[ k ] = sqrt(t( Ef$Composition[ k, ] - w_b ) %*% S %*% ( Ef$Composition[ k, ] - w_b ) );
+}
+
+##################################################################################################################
+### Benchmark-relative statistics
+# MV benchmark-relative quadratic optimization to determine one-parameter frontier of quasi-optimal solutions
+Ef_b = EfficientFrontierReturnsBenchmark(NumPortf, S, E, w_b);
+Rel_ExpectedValue_b = array( 0, NumPortf );
+Rel_Std_Deviation_b = array( 0, NumPortf );
+for( k in 1 : NumPortf )
+{
+    Rel_ExpectedValue_b[ k ] = t( Ef_b$Composition[ k, ] - w_b ) %*% E;
+    Rel_Std_Deviation_b[ k ] = sqrt(t( Ef_b$Composition[ k, ] - w_b ) %*% S %*% ( Ef_b$Composition[ k, ] - w_b ) );
+}
+
+##################################################################################################################
+### Plots
+# frontiers in total return space
+dev.new();
+plot( Ef$Volatility, Ef$ExpectedValue, type = "l", lwd = 2, col = "blue", xlab = "st.dev. rets.", ylab = "exp.val rets.",
+    xlim = c( Ef_b$Volatility[1], Ef_b$Volatility[length(Ef_b$Volatility)] ), ylim = c( min(Ef_b$ExpectedValue), max(Ef_b$ExpectedValue)) );
+lines( Ef_b$Volatility , Ef_b$ExpectedValue, type = "l", lwd = 2, col = "red" );
+legend( "topleft", legend = c( "total ret", "relative" ), col = c( "blue","red" ),
+    lty = 1, bg = "gray90" );
+
+# frontiers in relative return space
+dev.new();
+plot( Rel_Std_Deviation, Rel_ExpectedValue, type = "l", lwd = 2, col = "blue", xlab = "TE rets.", ylab = "EOP rets.",
+    xlim = c( Rel_Std_Deviation_b[1], Rel_Std_Deviation_b[length(Rel_Std_Deviation_b)] ), ylim = c( min( Rel_ExpectedValue_b ), max( Rel_ExpectedValue_b )) );
+lines( Rel_Std_Deviation_b, Rel_ExpectedValue_b, lwd = 2, col = "red" );
+legend( "topleft", legend = c( "total ret", "relative" ), col = c( "blue","red" ),
+    lty = 1, bg = "gray90" );
Added: pkg/Meucci/demo/covNRets.Rda
===================================================================
(Binary files differ)

Property changes on: pkg/Meucci/demo/covNRets.Rda
___________________________________________________________________
Added: svn:mime-type
   + application/octet-stream

Added: pkg/Meucci/man/EfficientFrontierReturns.Rd
===================================================================
--- pkg/Meucci/man/EfficientFrontierReturns.Rd	                        (rev 0)
+++ pkg/Meucci/man/EfficientFrontierReturns.Rd	2013-08-28 18:02:06 UTC (rev 2918)
@@ -0,0 +1,42 @@
+\name{EfficientFrontierReturns}
+\alias{EfficientFrontierReturns}
+\title{Compute the mean-variance efficient frontier (on returns) by quadratic programming, as described in
+A. Meucci "Risk and Asset Allocation", Springer, 2005}
+\usage{
+  EfficientFrontierReturns(NumPortf, Covariance,
+    ExpectedValues, Constraints = NULL)
+}
+\arguments{
+  \item{NumPortf}{: [scalar] number of portfolio in the
+  efficient frontier}
+
+  \item{Covariance}{: [matrix] (N x N) covariance matrix}
+
+  \item{ExpectedValues}{: [vector] (N x 1) expected
+  returns}
+
+  \item{Constraints}{: [struct] set of constraints.
+  Default: weights sum to one, and no-short positions}
+}
+\value{
+  ExpectedValue : [vector] (NumPortf x 1) expected values
+  of the portfolios
+
+  Volatility : [vector] (NumPortf x 1) standard deviations
+  of the portfolios
+
+  Composition : [matrix] (NumPortf x N) optimal portfolios
+}
+\description{
+  Compute the mean-variance efficient frontier (on returns)
+  by quadratic programming, as described in A. Meucci "Risk
+  and Asset Allocation", Springer, 2005
+}
+\author{
+  Xavier Valls \email{flamejat at gmail.com}
+}
+\references{
+  \url{http://symmys.com/node/170} See Meucci's script for
+  "EfficientFrontierReturns.m"
+}
+
Added: pkg/Meucci/man/EfficientFrontierReturnsBenchmark.Rd
===================================================================
--- pkg/Meucci/man/EfficientFrontierReturnsBenchmark.Rd	                        (rev 0)
+++ pkg/Meucci/man/EfficientFrontierReturnsBenchmark.Rd	2013-08-28 18:02:06 UTC (rev 2918)
@@ -0,0 +1,44 @@
+\name{EfficientFrontierReturnsBenchmark}
+\alias{EfficientFrontierReturnsBenchmark}
+\title{Compute the mean-variance efficient frontier (on returns) by quadratic programming, as described in
+A.
Meucci "Risk and Asset Allocation", Springer, 2005} +\usage{ + EfficientFrontierReturnsBenchmark(NumPortf, Covariance, + ExpectedValues, Benchmark, Constraints = NULL) +} +\arguments{ + \item{NumPortf}{: [scalar] number of portfolio in the + efficient frontier} + + \item{Covariance}{: [matrix] (N x N) covariance matrix} + + \item{ExpectedValues}{: [vector] (N x 1) expected + returns} + + \item{Benchmark}{: [vector] (N x 1) of benchmark weights} + + \item{Constraints}{: [struct] set of constraints. + Default: weights sum to one, and no-short positions} +} +\value{ + ExpectedValue : [vector] (NumPortf x 1) expected values + of the portfolios + + Volatility : [vector] (NumPortf x 1) standard deviations + of the portfolios + + Composition : [matrix] (NumPortf x N) optimal portfolios +} +\description{ + Compute the mean-variance efficient frontier (on returns) + by quadratic programming, as described in A. Meucci "Risk + and Asset Allocation", Springer, 2005 +} +\author{ + Xavier Valls \email{flamejat at gmail.com} +} +\references{ + \url{http://symmys.com/node/170} See Meucci's script for + "EfficientFrontierReturnsBenchmark.m" +} + From noreply at r-forge.r-project.org Wed Aug 28 20:12:16 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Wed, 28 Aug 2013 20:12:16 +0200 (CEST) Subject: [Returnanalytics-commits] r2919 - in pkg/Meucci: . R demo man Message-ID: <20130828181216.20CC318429E@r-forge.r-project.org> Author: xavierv Date: 2013-08-28 20:12:15 +0200 (Wed, 28 Aug 2013) New Revision: 2919 Added: pkg/Meucci/R/EfficientFrontierPrices.R pkg/Meucci/demo/S_MeanVarianceHorizon.R pkg/Meucci/demo/S_MeanVarianceOptimization.R pkg/Meucci/man/EfficientFrontierPrices.Rd Modified: pkg/Meucci/DESCRIPTION pkg/Meucci/NAMESPACE Log: - added S_MeanVarianceHorizon and S_MeanVarianceOptimization demo scripts from chapter 6 and its associated functions Modified: pkg/Meucci/DESCRIPTION =================================================================== --- pkg/Meucci/DESCRIPTION 2013-08-28 18:02:06 UTC (rev 2918) +++ pkg/Meucci/DESCRIPTION 2013-08-28 18:12:15 UTC (rev 2919) @@ -92,5 +92,6 @@ 'MaxRsqCS.R' 'EfficientFrontierReturnsBenchmark.R' 'EfficientFrontierReturns.R' + 'EfficientFrontierPrices.R' ' FitOrnsteinUhlenbeck.R' Modified: pkg/Meucci/NAMESPACE =================================================================== --- pkg/Meucci/NAMESPACE 2013-08-28 18:02:06 UTC (rev 2918) +++ pkg/Meucci/NAMESPACE 2013-08-28 18:12:15 UTC (rev 2919) @@ -10,6 +10,7 @@ export(ConvertCompoundedReturns2Price) export(Cumul2Raw) export(DetectOutliersViaMVE) +export(EfficientFrontierPrices) export(EfficientFrontierReturns) export(EfficientFrontierReturnsBenchmark) export(EntropyProg) Added: pkg/Meucci/R/EfficientFrontierPrices.R =================================================================== --- pkg/Meucci/R/EfficientFrontierPrices.R (rev 0) +++ pkg/Meucci/R/EfficientFrontierPrices.R 2013-08-28 18:12:15 UTC (rev 2919) @@ -0,0 +1,88 @@ +#' Compute the mean-variance efficient frontier (on prices) by quadratic programming, as described in +#' A. 
Meucci "Risk and Asset Allocation", Springer, 2005
+#'
+#'  @param  NumPortf       : [scalar] number of portfolio in the efficient frontier
+#'  @param  Covariance     : [matrix] (N x N) covariance matrix
+#'  @param  ExpectedValues : [vector] (N x 1) expected returns
+#'  @param  Current_Prices : [vector] (N x 1) current prices
+#'  @param  Budget         : [scalar] budget constraint
+#'
+#'  @return ExpectedValue  : [vector] (NumPortf x 1) expected values of the portfolios
+#'  @return Std_Deviation  : [vector] (NumPortf x 1) standard deviations of the portfolios
+#'  @return Composition    : [matrix] (NumPortf x N) optimal portfolios
+#'
+#' @references
+#' \url{http://symmys.com/node/170}
+#' See Meucci's script for "EfficientFrontierPrices.m"
+#'
+#' @author Xavier Valls \email{flamejat@@gmail.com}
+#' @export
+
+EfficientFrontierPrices = function( NumPortf, Covariance, ExpectedValues, Current_Prices, Budget )
+{
+    NumAssets = ncol( Covariance );
+
+    ##################################################################################################################
+    ### Determine exp value of minimum-variance portfolio
+    FirstDegree  = matrix( 0, NumAssets, 1 );
+    SecondDegree = Covariance;
+    Aeq  = t( Current_Prices );
+    beq  = Budget;
+    A    = -diag( 1, NumAssets );
+    b    = matrix( 0, NumAssets, 1 );
+    Amat = rbind( Aeq, A);
+    bvec = rbind( beq, b);
+    #x0 = Budget / NumAssets * matrix( 1 , NumAssets, 1 );
+    MinVol_Allocation = matrix( solve.QP( Dmat = SecondDegree, dvec = -FirstDegree, Amat = -t(Amat), bvec = -bvec, meq = length( beq ) )$solution );
+    MinVol_ExpVal = t( MinVol_Allocation ) %*% ExpectedValues;
+
+    ##################################################################################################################
+    ### Determine exp value of maximum-expected value portfolio
+    Max_ExpVal = Budget * apply( ExpectedValues / Current_Prices, 2, max );
+
+    ##################################################################################################################
+    ### Slice efficient frontier in NumPortf equally thick horizontal sectors in the upper branch only
+    Target_ExpectedValues = MinVol_ExpVal + ( 0 : NumPortf ) * ( Max_ExpVal - MinVol_ExpVal ) / NumPortf;
+
+    ##################################################################################################################
+    ### Compute the NumPortf compositions and risk-return coordinates
+    Composition   = matrix( NaN, NumPortf, NumAssets );
+    Std_Deviation = matrix( NaN, NumPortf, 1 );
+    ExpectedValue = matrix( NaN, NumPortf, 1 );
+
+    Min_ExpectedValue = min(ExpectedValues);
+    Max_ExpectedValue = max(ExpectedValues);
+
+    IndexMin = which.min(ExpectedValues);
+    IndexMax = which.max(ExpectedValues);
+
+    for( i in 1 : NumPortf )
+    {
+        # determine initial condition
+        Matrix = rbind( cbind( Min_ExpectedValue, Max_ExpectedValue ),
+                        cbind( Current_Prices[ IndexMin ], Current_Prices[ IndexMax ] ) );
+
+        Allocation_0_MinMax = solve( Matrix ) %*% rbind( Target_ExpectedValues[ i ], Budget );
+
+        Allocation_0 = matrix( 0, NumAssets, 1 );
+        Allocation_0[ IndexMin ] = Allocation_0_MinMax[ 1 ];
+        Allocation_0[ IndexMax ] = Allocation_0_MinMax[ 2 ];
+
+        # determine least risky portfolio for given expected return
+        AEq  = rbind( Aeq, t(ExpectedValues) );
+        bEq  = rbind( beq, Target_ExpectedValues[ i ] );
+        Amat = rbind( AEq, A);
+        bvec = rbind( bEq, b);
+
+        #options = optimset('Algorithm', 'medium-scale');
+        Allocation = t( solve.QP( Dmat = SecondDegree, dvec = -FirstDegree, Amat = -t(Amat), bvec = -bvec, meq = length( bEq ) )$solution );
+
+        # store
+        Composition[ i, ]
= Allocation; + Std_Deviation[ i ] = sqrt(Allocation %*% Covariance %*% t(Allocation) ); + ExpectedValue[ i ] = Target_ExpectedValues[ i ]; + } + + return( list( ExpectedValue = ExpectedValue, Std_Deviation = Std_Deviation, Composition = Composition ) ); +} \ No newline at end of file Added: pkg/Meucci/demo/S_MeanVarianceHorizon.R =================================================================== --- pkg/Meucci/demo/S_MeanVarianceHorizon.R (rev 0) +++ pkg/Meucci/demo/S_MeanVarianceHorizon.R 2013-08-28 18:12:15 UTC (rev 2919) @@ -0,0 +1,97 @@ +#' This script projects the distribution of the market invariants for stock market (i.e. compounded returns) +#' from the estimation interval to the investment horizon. +#' Then it computes the distribution of prices at the investment horizon and performs the two-step mean-variance +#' optimization in terms of returns and relative portfolio weights. +#' Described in A. Meucci,"Risk and Asset Allocation", Springer, 2005, Chapter 6. +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_MeanVarianceHorizon.m" +# +#' @author Xavier Valls \email{flamejat@@gmail.com} + +################################################################################################################## +### Load data +load("../data/stockSeries.Rda"); + +################################################################################################################## +### Inputs +tau = 1/252; # time to horizon expressed in years +tau_tilde = 1/52; # estimation period expressed in years +nSim = 10000; +Budget = 100; +Zeta = 10; # risk aversion parameter + +################################################################################################################## +### Estimation of weekly invariants stock market (compounded returns) +Week_C = diff( log( StockSeries$Prices.TimeSeries ) ); +N = ncol( Week_C ); + +la = 0.1; +Shrk_Exp = matrix( 0, N, 1); +Exp_C_Hat = (1 - la) * matrix( apply( Week_C, 2, mean ) ) + la * Shrk_Exp; + +lb = 0.1; +Shrk_Cov = diag( 1, N ) * sum( diag( cov( Week_C ) ) ) / N; +Cov_C_Hat = (1 - lb) * cov(Week_C) + lb * (Shrk_Cov); + +################################################################################################################## +### Stock market projection to horizon and pricing +Exp_Hrzn_C_Hat = Exp_C_Hat * tau / tau_tilde; +Cov_Hrzn_C_Hat = Cov_C_Hat * tau / tau_tilde; +StockCompReturns_Scenarios = rmvnorm( nSim, Exp_Hrzn_C_Hat, Cov_Hrzn_C_Hat); + +StockCurrent_Prices = matrix( StockSeries$Prices.TimeSeries[ nrow( StockSeries$Prices.TimeSeries ), ]); +StockMarket_Scenarios = ( matrix( 1, nSim, 1) %*% t(StockCurrent_Prices)) * exp( StockCompReturns_Scenarios ); +################################################################################################################## +### MV inputs - analytical +Stock = ConvertCompoundedReturns2Price(Exp_Hrzn_C_Hat, Cov_Hrzn_C_Hat, StockCurrent_Prices); +print( Stock$Exp_Prices ); +print( Stock$Cov_Prices ); + +################################################################################################################## +### MV inputs - numerical +StockExp_Prices = matrix( apply( StockMarket_Scenarios, 2, mean )); +StockCov_Prices = cov( StockMarket_Scenarios ); +print(StockExp_Prices); +print(StockCov_Prices); + +StockExp_LinRets = StockExp_Prices / StockCurrent_Prices - 1; +StockCov_LinRets = diag( c(1 / StockCurrent_Prices) ) %*% StockCov_Prices %*% diag( c(1 / StockCurrent_Prices) ); + 
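+# (editor's addition, illustrative only) the analytical moments from
+# ConvertCompoundedReturns2Price() and the Monte Carlo moments above should
+# agree up to simulation error; both printed quantities should be near zero:
+print( max( abs( StockExp_Prices - Stock$Exp_Prices ) / abs( Stock$Exp_Prices ) ) );
+print( max( abs( StockCov_Prices - Stock$Cov_Prices ) ) / max( abs( Stock$Cov_Prices ) ) );
+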
+################################################################################################################## +### Portolio optimization +# step 1: MV quadratic optimization to determine one-parameter frontier of quasi-optimal solutions ... +NumPortf = 40; +EFR = EfficientFrontierReturns( NumPortf, StockCov_LinRets, StockExp_LinRets ); + +# step 2: ...evaluate satisfaction for all allocations on the frontier ... +Store_Satisfaction = NULL; +for( n in 1 : NumPortf ) +{ + Allocation = matrix( EFR$Composition[ n, ] ) * Budget / StockCurrent_Prices; + Objective_Scenario = StockMarket_Scenarios %*% Allocation; + Utility = -exp( -1 / Zeta * Objective_Scenario); + ExpU = apply( Utility, 2, mean ); + Satisfaction = -Zeta * log( -ExpU ); + Store_Satisfaction = cbind( Store_Satisfaction, Satisfaction ); ##ok +} + +# ... and pick the best +Optimal_Index = which.max(Store_Satisfaction); +Optimal_Allocation = EFR$Composition[ Optimal_Index, ]; + +################################################################################################################## +### Plots +dev.new(); + +par(mfrow = c( 2, 1 ) ); +# rets MV frontier +h = plot(EFR$Volatility, EFR$ExpectedValue, "l", lwd = 2, xlab = "st.dev. rets.", ylab = "exp.val rets.", + xlim = c( EFR$Volatility[1], EFR$Volatility[ length(EFR$Volatility) ]), ylim = c( min( EFR$ExpectedValue ), max( EFR$ExpectedValue ) ) ); + + +# satisfaction as function of st.deviation on the frontier +h = plot( EFR$Volatility, Store_Satisfaction, "l", lwd = 2, xlab = "st.dev. rets.", ylab = "satisfaction", + xlim = c( EFR$Volatility[1], EFR$Volatility[ length(EFR$Volatility) ]), ylim = c( min(Store_Satisfaction), max(Store_Satisfaction) ) ); + Added: pkg/Meucci/demo/S_MeanVarianceOptimization.R =================================================================== --- pkg/Meucci/demo/S_MeanVarianceOptimization.R (rev 0) +++ pkg/Meucci/demo/S_MeanVarianceOptimization.R 2013-08-28 18:12:15 UTC (rev 2919) @@ -0,0 +1,158 @@ +#' This script projects the distribution of the market invariants for the bond and stock markets +#' (i.e. the changes in yield to maturity and compounded returns) from the estimation interval to the investment +#' horizon +#' Then it computes the distribution of prices at the investment horizon and performs the two-step mean-variance +#' optimization. +#' Described in A. Meucci,"Risk and Asset Allocation", Springer, 2005, Chapter 6. 
+#'
+#' @references
+#' \url{http://symmys.com/node/170}
+#' See Meucci's script for "S_MeanVarianceOptimization.m"
+#'
+#' @author Xavier Valls \email{flamejat@@gmail.com}
+
+##################################################################################################################
+### Load data
+load( "../data/stockSeries.Rda" );
+
+##################################################################################################################
+### Inputs
+tau = 4 / 52;       # time to horizon expressed in years
+tau_tilde = 1 / 52; # estimation period expressed in years
+
+FlatCurve  = 0.04;
+TimesToMat = c( 4, 5, 10, 52, 520 ) / 52; # time to maturity of selected bonds expressed in years
+
+# parameters of the distribution of the changes in yield to maturity
+u_minus_tau = TimesToMat - tau;
+nu  = 8;
+mus = 0 * u_minus_tau;
+sigmas = ( 20 + 5 / 4 * u_minus_tau ) / 10000;
+
+Num_Scenarios = 100000;
+Budget = 100;
+Zeta = 10; # risk aversion parameter
+
+##################################################################################################################
+### Estimation of weekly invariants stock market (compounded returns)
+Week_C = diff( log( StockSeries$Prices.TimeSeries ) );
+T = dim( Week_C )[1];
+N = dim( Week_C )[2];
+
+# shrinkage estimator of mean vector
+la = 0.1;
+Shrk_Exp = matrix( 0, N, 1 );
+Exp_C_Hat = (1 - la) * matrix( apply( Week_C, 2, mean) ) + la * Shrk_Exp;
+
+# shrinkage estimator of covariance
+lb = 0.1;
+Shrk_Cov = diag( 1, N ) * sum( diag( cov( Week_C ) ) ) / N;
+Cov_C_Hat = (1 - lb) * cov( Week_C ) + lb * ( Shrk_Cov );
+
+##################################################################################################################
+### Stock market projection to horizon and pricing
+Exp_Hrzn_C_Hat = Exp_C_Hat * tau / tau_tilde;
+Cov_Hrzn_C_Hat = Cov_C_Hat * tau / tau_tilde;
+StockCompReturns_Scenarios = rmvnorm( Num_Scenarios, Exp_Hrzn_C_Hat, Cov_Hrzn_C_Hat );
+
+# the last row of the price history holds the current prices (was indexed with ncol)
+StockCurrent_Prices = matrix( StockSeries$Prices.TimeSeries[ nrow(StockSeries$Prices.TimeSeries), ] );
+StockMarket_Scenarios = ( matrix( 1, Num_Scenarios, 1 ) %*% t( StockCurrent_Prices ) ) * exp( StockCompReturns_Scenarios );
+
+##################################################################################################################
+### Bond market projection to horizon and pricing
+BondCurrent_Prices_Shifted = exp( -FlatCurve * u_minus_tau );
+BondCurrent_Prices = exp( -FlatCurve * TimesToMat );
+
+# project bond market to horizon
+N = length( TimesToMat ); # number of bonds
+
+# generate common source of randomness
+U = runif( Num_Scenarios );
+BondMarket_Scenarios = matrix( 0, Num_Scenarios, N );
+for( n in 1 : N )
+{
+    # generate co-dependent changes in yield-to-maturity
+    DY_Scenarios = qnorm( U, mus[ n ] * tau / tau_tilde, sigmas[ n ] * sqrt( tau / tau_tilde ) );
+
+    # compute the horizon prices, (3.81) in "Risk and Asset Allocation" - Springer
+    X = -u_minus_tau[ n ] * DY_Scenarios;
+    BondMarket_Scenarios[ , n ] = BondCurrent_Prices_Shifted[ n ] * exp( X );
+}
+
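+# (editor's addition, illustrative only) a single uniform draw U drives every
+# maturity, so the simulated yield changes -- and therefore the bond prices --
+# are comonotonic rather than merely correlated; e.g. for two maturities with
+# non-zero variance the sample correlation is close to one:
+print( cor( BondMarket_Scenarios[ , 2 ], BondMarket_Scenarios[ , N ] ) );
+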
+################################################################################################################## +### MV inputs + +# analytical +Exp_Hrzn_DY_Hat = mus * tau / tau_tilde; +SDev_Hrzn_DY_Hat = sigmas * sqrt(tau / tau_tilde); +Corr_Hrzn_DY_Hat = matrix( 1, N, N ); # full co-dependence +Cov_Hrzn_DY_Hat = diag(SDev_Hrzn_DY_Hat, length( SDev_Hrzn_DY_Hat)) %*% Corr_Hrzn_DY_Hat %*% diag(SDev_Hrzn_DY_Hat, length( SDev_Hrzn_DY_Hat)); +#[BondExp_Prices, BondCov_Prices] +CCY2P = ConvertChangeInYield2Price(Exp_Hrzn_DY_Hat, Cov_Hrzn_DY_Hat, u_minus_tau, BondCurrent_Prices_Shifted); +print( CCY2P$Exp_Prices ); +print( CCY2P$Cov_Prices ); + +# numerical +BondExp_Prices = matrix( apply( BondMarket_Scenarios,2, mean ) ); +BondCov_Prices = cov(BondMarket_Scenarios); +print(BondExp_Prices); +print(BondCov_Prices); + +################################################################################################################## +### Portolio optimization +# step 1: MV quadratic optimization to determine one-parameter frontier of quasi-optimal solutions ... +E = rbind( StockExp_Prices, BondExp_Prices[ 2 ] ); +S = blkdiag( StockCov_Prices, matrix( BondCov_Prices[ 2, 2] ) ); +Current_Prices = rbind( StockCurrent_Prices, BondCurrent_Prices[ 2 ] ); +Market_Scenarios = cbind( StockMarket_Scenarios, BondMarket_Scenarios[ , 2 ] ); + +NumPortf = 40; +# frontier with QP (no short-sales) +#[ExpectedValue, EFP$Std_Deviation, EFP$Composition] + +EFP = EfficientFrontierPrices( NumPortf, S, E,Current_Prices, Budget ); + +# step 2: ...evaluate satisfaction for all EFP$Composition on the frontier ... +Store_Satisfaction = NULL; +for( n in 1 : NumPortf ) +{ + Allocation = matrix( EFP$Composition[ n, ] ); + Objective_Scenario = Market_Scenarios %*% Allocation; + Utility = -exp( -1 / Zeta * Objective_Scenario ); + ExpU = apply( Utility, 2, mean ); + Satisfaction = -Zeta * log( -ExpU ); + Store_Satisfaction = cbind( Store_Satisfaction, Satisfaction ); ##ok +} + +# ... and pick the best +Optimal_Index = which.max( Store_Satisfaction ); +Optimal_Allocation = EFP$Composition[ Optimal_Index, ]; +print(Optimal_Allocation); + +################################################################################################################## +### Plots +dev.new() + +par(mfrow = c( 2, 1 ) ); +# rets MV frontier +h = plot( EFP$Std_Deviation, EFP$ExpectedValue, "l", lwd = 2, xlab = "st.dev. prices", ylab = "exp.val prices", + xlim = c( EFP$Std_Deviation[1], EFP$Std_Deviation[ length(EFP$Std_Deviation) ]), ylim = c( min(EFP$ExpectedValue), max(EFP$ExpectedValue) ) ); + + +# satisfaction as function of st.deviation on the frontier +h = plot(EFP$Std_Deviation, Store_Satisfaction, "l", lwd = 2, xlab = "st.dev. prices", ylab = "satisfaction", + xlim = c( EFP$Std_Deviation[1], EFP$Std_Deviation[ length(EFP$Std_Deviation) ]), ylim = c( min(Store_Satisfaction), max(Store_Satisfaction) ) ); + Added: pkg/Meucci/man/EfficientFrontierPrices.Rd =================================================================== --- pkg/Meucci/man/EfficientFrontierPrices.Rd (rev 0) +++ pkg/Meucci/man/EfficientFrontierPrices.Rd 2013-08-28 18:12:15 UTC (rev 2919) @@ -0,0 +1,43 @@ +\name{EfficientFrontierPrices} +\alias{EfficientFrontierPrices} +\title{Compute the mean-variance efficient frontier (on prices) by quadratic programming, as described in +A. 
Meucci "Risk and Asset Allocation", Springer, 2005} +\usage{ + EfficientFrontierPrices(NumPortf, Covariance, + ExpectedValues, Current_Prices, Budget) +} +\arguments{ + \item{NumPortf}{: [scalar] number of portfolio in the + efficient frontier} + + \item{Covariance}{: [matrix] (N x N) covariance matrix} + + \item{ExpectedValues}{: [vector] (N x 1) expected + returns} + + \item{Current_Prices}{: [vector] (N x 1) current prices} + + \item{Budget}{: [scalar] budget constraint} +} +\value{ + ExpectedValue : [vector] (NumPortf x 1) expected values + of the portfolios + + Std_Deviation : [vector] (NumPortf x 1) standard + deviations of the portfolios + + Composition : [matrix] (NumPortf x N) optimal portfolios +} +\description{ + Compute the mean-variance efficient frontier (on prices) + by quadratic programming, as described in A. Meucci "Risk + and Asset Allocation", Springer, 2005 +} +\author{ + Xavier Valls \email{flamejat at gmail.com} +} +\references{ + \url{http://symmys.com/node/170} See Meucci's script for + "EfficientFrontierReturns.m" +} + From noreply at r-forge.r-project.org Thu Aug 29 02:21:39 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 02:21:39 +0200 (CEST) Subject: [Returnanalytics-commits] r2920 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: . .Rproj.user .Rproj.user/E5D7D248 .Rproj.user/E5D7D248/pcs .Rproj.user/E5D7D248/sdb .Rproj.user/E5D7D248/sdb/per .Rproj.user/E5D7D248/sdb/prop noniid.sm noniid.sm/R noniid.sm/man noniid.sm.Rcheck noniid.sm.Rcheck/00_pkg_src noniid.sm.Rcheck/00_pkg_src/noniid.sm noniid.sm.Rcheck/00_pkg_src/noniid.sm/R noniid.sm.Rcheck/00_pkg_src/noniid.sm/man Message-ID: <20130829002139.8795B1855DC@r-forge.r-project.org> Author: shubhanm Date: 2013-08-29 02:21:38 +0200 (Thu, 29 Aug 2013) New Revision: 2920 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rhistory pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/build_options pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/ctx/ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/files-pane.pper pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/source-pane.pper pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/windowlayoutstate.pper pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/workbench-pane.pper pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/persistent-state pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/per/ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/per/t/ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/per/u/ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/13CBABD9 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/1AED4783 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/1C2E0E7A pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/24070AAD pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/27098A0 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/287CC508 
pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/2FCB34FB pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/3F182A3F pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/43489896 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/4F871881 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/501C707D pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/52DD307F pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/546A621A pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/5FF4F835 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/6B6D818D pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/71A7E464 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/73C76D98 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/7D213B54 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/85F409DB pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/87F2563F pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/899F8BD1 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/8E454FCF pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/95767693 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/9905A878 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/99F4D7A7 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/A5107036 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/A577C116 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/A695C2A0 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/A78B58AA pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/AF00C71A pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/B17E17F1 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/B3CFFA85 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/B61C67B3 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/C8AAB8BD pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/D29C9CB4 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/D82CBDC4 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/E0A8BCED pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/E7FF3434 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/ED14616B pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/F5B939F0 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/FE844B95 pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/INDEX pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/DESCRIPTION pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/NAMESPACE pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/ 
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/ACStdDev.annualized.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/CDrawdown.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/CalmarRatio.Norm.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/EmaxDDGBM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/GLMSmoothIndex.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/LoSharpe.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/Return.GLM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/Return.Okunev.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/SterlingRatio.Norm.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/UnsmoothReturn.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/chart.AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/chart.Autocorrelation.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/maxDDGBM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/na.skip.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/noniid.sm-internal.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/se.LoSharpe.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/table.ComparitiveReturn.GLM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/table.EMaxDDGBM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/R/table.UnsmoothReturn.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/ACStdDev.annualized.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/AcarSim.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/CalmarRatio.Norm.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/Cdrawdown.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/EMaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/GLMSmoothIndex.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/LoSharpe.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/Return.GLM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/Return.Okunev.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/SterlingRatio.Norm.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/chart.AcarSim.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/chart.Autocorrelation.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/noniid.sm-package.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/quad.Rd 
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/se.LoSharpe.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/table.ComparitiveReturn.GLM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/table.EmaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00_pkg_src/noniid.sm/man/table.UnsmoothReturn.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00check.log pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/00install.out pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/noniid.sm-Ex.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/noniid.sm-Ex.Rout pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/noniid.sm-Ex.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/NAMESPACE pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/ACStdDev.annualized.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/CDrawdown.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/CalmarRatio.Norm.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.GLM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/SterlingRatio.Norm.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/UnsmoothReturn.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.Autocorrelation.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/man/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/maxDDGBM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/na.skip.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/noniid.sm-internal.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/se.LoSharpe.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.ComparitiveReturn.GLM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.EMaxDDGBM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.UnsmoothReturn.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/Read-and-delete-me pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/inst/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/ACStdDev.annualized.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/CalmarRatio.Norm.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Cdrawdown.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/EMaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/GLMSmoothIndex.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/LoSharpe.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.GLM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/SterlingRatio.Norm.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/chart.AcarSim.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/chart.Autocorrelation.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/noniid.sm-package.Rd 
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/quad.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/se.LoSharpe.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.ComparitiveReturn.GLM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.EmaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.UnsmoothReturn.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm_0.1.tar.gz pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm_1.0.tar.gz Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/Shubhankit.Rproj Log: nonidd.sm package version 1.01 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rhistory =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rhistory (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rhistory 2013-08-29 00:21:38 UTC (rev 2920) @@ -0,0 +1,48 @@ +devtools::load_all(".") +package.skeleton("noniid.sm") +R CMD check noniid.sm +library(noniid.sm) +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R') +library(PerformanceAnalytics) +data(edhec) +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/ACStdDev.annualized.R') +devtools::load_all("noniid.sm") +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/ACStdDev.annualized.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.GLM.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/ACStdDev.annualized.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/se.LoSharpe.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/se.LoSharpe.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R') +get("edhec") +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/ACStdDev.annualized.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.Autocorrelation.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/CalmarRatio.Norm.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/CDrawdown.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/SterlingRatio.Norm.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R') 
+source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.Autocorrelation.R') +roxygenize() +roxygenize("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/") +roxygenize("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R") +roxygenize("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm") +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R') +roxygenize("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R") +roxygenize("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm") +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R') +roxygenize("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm") +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/CDrawdown.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.ComparitiveReturn.GLM.R') +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.UnsmoothReturn.R') +roxygenize("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm") +roxygenize("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm") +roxygenize("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm") +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R') +library(noniid.sm) Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/build_options =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/build_options (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/build_options 2013-08-29 00:21:38 UTC (rev 2920) @@ -0,0 +1,4 @@ +auto_roxygenize_for_build_and_reload="0" +auto_roxygenize_for_build_package="1" +auto_roxygenize_for_check="1" +makefile_args="" Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/files-pane.pper =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/files-pane.pper (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/files-pane.pper 2013-08-29 00:21:38 UTC (rev 2920) @@ -0,0 +1,9 @@ +{ + "path" : "C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit", + "sortOrder" : [ + { + "ascending" : true, + "columnIndex" : 2 + } + ] +} \ No newline at end of file Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/source-pane.pper =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/source-pane.pper (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/source-pane.pper 2013-08-29 00:21:38 UTC (rev 2920) @@ -0,0 +1,3 @@ +{ + "activeTab" : 4 
+} \ No newline at end of file Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/windowlayoutstate.pper =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/windowlayoutstate.pper (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/windowlayoutstate.pper 2013-08-29 00:21:38 UTC (rev 2920) @@ -0,0 +1,14 @@ +{ + "left" : { + "panelheight" : 646, + "splitterpos" : 410, + "topwindowstate" : "NORMAL", + "windowheight" : 684 + }, + "right" : { + "panelheight" : 646, + "splitterpos" : 410, + "topwindowstate" : "MINIMIZE", + "windowheight" : 684 + } +} \ No newline at end of file Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/workbench-pane.pper =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/workbench-pane.pper (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/pcs/workbench-pane.pper 2013-08-29 00:21:38 UTC (rev 2920) @@ -0,0 +1,4 @@ +{ + "TabSet1" : 2, + "TabSet2" : 3 +} \ No newline at end of file Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/persistent-state =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/persistent-state (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/persistent-state 2013-08-29 00:21:38 UTC (rev 2920) @@ -0,0 +1,9 @@ +build-last-errors="[]" +build-last-errors-base-dir="C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/" +build-last-outputs="[{\"output\":\"==> Rcmd.exe INSTALL --no-multiarch noniid.sm\\n\\n\",\"type\":0},{\"output\":\"* installing to library 'C:/Users/shubhankit/Documents/R/win-library/3.0'\\r\\n\",\"type\":1},{\"output\":\"* installing *source* package 'noniid.sm' ...\\r\\n\",\"type\":1},{\"output\":\"\",\"type\":1},{\"output\":\"** R\\r\\n\",\"type\":1},{\"output\":\"\",\"type\":1},{\"output\":\"** preparing package for lazy loading\\r\\n\",\"type\":1},{\"output\":\"\",\"type\":1},{\"output\":\"** help\\r\\n\",\"type\":1},{\"output\":\"\",\"type\":1},{\"output\":\"Warning: C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/noniid.sm-package.Rd:32: All text must be in a section\\r\\n\",\"type\":2},{\"output\":\"Warning: C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/noniid.sm-package.Rd:33: All text must be in a section\\r\\n\",\"type\":2},{\"output\":\"\",\"type\":1},{\"output\":\"*** installing help indices\\r\\n\",\"type\":1},{\"output\":\"\",\"type\":1},{\"output\":\"** building package indices\\r\\n\",\"type\":1},{\"output\":\"\",\"type\":1},{\"output\":\"** testing if installed package can be loaded\\r\\n\",\"type\":1},{\"output\":\"\",\"type\":1},{\"output\":\"* DONE (noniid.sm)\\r\\n\",\"type\":1},{\"output\":\"\",\"type\":1}]" +compile_pdf_state="{\"errors\":[],\"output\":\"\",\"running\":false,\"tab_visible\":false,\"target_file\":\"\"}" +console_procs="[]" +files.monitored-path="" +find-in-files-state="{\"handle\":\"\",\"input\":\"\",\"path\":\"\",\"regex\":true,\"results\":{\"file\":[],\"line\":[],\"lineValue\":[],\"matchOff\":[],\"matchOn\":[]},\"running\":false}" +imageDirtyState="1" +saveActionState="1" Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/13CBABD9 
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/13CBABD9	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/13CBABD9	2013-08-29 00:21:38 UTC (rev 2920)
@@ -0,0 +1,2 @@
+{
+}
\ No newline at end of file

[The remaining sdb/prop hunks, through FE844B95 in the list above, are identical two-line empty JSON stubs and are omitted here; only 546A621A, B3CFFA85, C8AAB8BD and E0A8BCED differ, each adding a single "tempName" : "Untitled2" entry.]

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/INDEX
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/INDEX	(rev 0)
+++
pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/sdb/prop/INDEX 2013-08-29 00:21:38 UTC (rev 2920) @@ -0,0 +1,41 @@ +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2FREADME="5FF4F835" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2FDESCRIPTION="A5107036" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2FR%2FAcarSim.R="899F8BD1" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2FR%2FCDD.Opt.R="52DD307F" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2FR%2FCalmarRatio.Normalized.R="8E454FCF" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2FR%2FLoSharpe.R="13CBABD9" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2FR%2Fchart.AcarSim.R="E0A8BCED" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2FR%2Fse.LoSharpe.R="B3CFFA85" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2FR%2Ftable.normDD.R="73C76D98" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fman%2FCalmarRatio.normalized.Rd="A577C116" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FDESCRIPTION="1C2E0E7A" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2FACStdDev.annualized.R="6B6D818D" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2FAcarSim.R="FE844B95" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2FCDrawdown.R="A78B58AA" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2FCalmarRatio.Norm.R="87F2563F" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2FEmaxDDGBM.R="E7FF3434" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2FGLMSmoothIndex.R="99F4D7A7" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2FLoSharpe.R="27098A0" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2FReturn.GLM.R="85F409DB" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2FReturn.Okunev.R="ED14616B" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2FSterlingRatio.Norm.R="43489896" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2Fchart.Autocorrelation.R="71A7E464" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2Fnoniid.sm-internal.R="B17E17F1" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2Fse.LoSharpe.R="A695C2A0" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2Ftable.ComparitiveReturn.GLM.R="4F871881" 
+C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2Ftable.EMaxDDGBM.R="287CC508" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2FR%2Ftable.UnsmoothReturn.R="1AED4783" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fnoniid.sm%2Fman%2Fquad.Rd="501C707D" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fvignettes%2FACFSTDEV.rnw="9905A878" +C%3A%2FUsers%2Fshubhankit%2FDesktop%2FAgain%2Fpkg%2FPerformanceAnalytics%2Fsandbox%2FShubhankit%2Fvignettes%2FCheklov.CDDOpt="546A621A" [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2920 From noreply at r-forge.r-project.org Thu Aug 29 03:30:00 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 03:30:00 +0200 (CEST) Subject: [Returnanalytics-commits] r2921 - in pkg/PortfolioAnalytics: R sandbox Message-ID: <20130829013000.7FBB9185113@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-29 03:29:57 +0200 (Thu, 29 Aug 2013) New Revision: 2921 Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/R/extract.efficient.frontier.R pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R Log: Renaming the class of the actual frontier data in efficient.frontier objects. Omitting the hardcoded pch arg in calls to plot for efficient frontier charts. Extending examples for testing_efficient_frontier file. Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-29 00:21:38 UTC (rev 2920) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-29 01:29:57 UTC (rev 2921) @@ -120,7 +120,7 @@ } # plot a scatter of the assets - plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="mean", main=main, xlim=xlim, ylim=ylim, pch=5, axes=FALSE, ...) + plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) # plot the efficient line lines(x=x.f, y=y.f, col="darkgray", lwd=2) @@ -205,7 +205,7 @@ } # plot a scatter of the assets - plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="mean", main=main, xlim=xlim, ylim=ylim, pch=5, axes=FALSE, ...) + plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) # plot the efficient line lines(x=x.f, y=y.f, col="darkgray", lwd=2) @@ -403,7 +403,7 @@ } # plot the efficient frontier line - plot(x=frontier[, mtc], y=frontier[, mean.mtc], ylab="mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, pch=5, axes=FALSE, ...) + plot(x=frontier[, mtc], y=frontier[, mean.mtc], ylab="mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) 
 if(chart.assets){
   # risk-return scatter of the assets
   points(x=asset_risk, y=asset_ret)

Modified: pkg/PortfolioAnalytics/R/extract.efficient.frontier.R
===================================================================
--- pkg/PortfolioAnalytics/R/extract.efficient.frontier.R	2013-08-29 00:21:38 UTC (rev 2920)
+++ pkg/PortfolioAnalytics/R/extract.efficient.frontier.R	2013-08-29 01:29:57 UTC (rev 2921)
@@ -92,7 +92,7 @@
 }
 # combine the stats from the optimal portfolio to result matrix
 result <- rbind(opt, result)
-return(structure(result, class="efficient.frontier"))
+return(structure(result, class="frontier"))
 }
 
 #' Generate the efficient frontier for a mean-variance portfolio
@@ -174,7 +174,7 @@
 extractStats(optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI"))
 }
 colnames(out) <- names(stats)
-return(structure(out, class="efficient.frontier"))
+return(structure(out, class="frontier"))
 }
 
 #' Generate the efficient frontier for a mean-etl portfolio
@@ -258,7 +258,7 @@
 extractStats(optimize.portfolio(R=R, portfolio=portfolio, optimize_method="ROI"))
 }
 colnames(out) <- names(stats)
-return(structure(out, class="efficient.frontier"))
+return(structure(out, class="frontier"))
 }
 
 #' create an efficient frontier
@@ -415,6 +415,6 @@
 return(structure(list(call=call,
                       frontier=frontier,
                       R=R,
-                      portfolio=portfolio), class="efficient.frontier"))
+                      portfolio=portf), class="efficient.frontier"))
 }

Modified: pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R
===================================================================
--- pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R	2013-08-29 00:21:38 UTC (rev 2920)
+++ pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R	2013-08-29 01:29:57 UTC (rev 2921)
@@ -38,27 +38,48 @@
 # mean-var efficient frontier
 meanvar.ef <- create.EfficientFrontier(R=R, portfolio=meanvar.portf, type="mean-StdDev")
 print(meanvar.ef)
-summary(meanvar.ef)
-chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="b")
-chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l", rf=0)
+summary(meanvar.ef, digits=2)
+meanvar.ef$frontier
+# The RAR.text argument can be used for the risk-adjusted-return name on the legend;
+# by default it is 'Modified Sharpe Ratio'
+chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="b", RAR.text="Sharpe Ratio")
+# The tangency portfolio and line are plotted by default; these can be omitted
+# by setting rf=NULL
+chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l", rf=NULL)
 chart.Weights.EF(meanvar.ef, colorset=bluemono, match.col="StdDev")
 
 # run optimize.portfolio and chart the efficient frontier for that object
 opt_meanvar <- optimize.portfolio(R=R, portfolio=meanvar.portf, optimize_method="ROI", trace=TRUE)
-chart.EfficientFrontier(opt_meanvar, match.col="StdDev", n.portfolios=50)
+
+# The efficient frontier is created from the 'opt_meanvar' object by getting
+# the portfolio and returns objects and then passing those to create.EfficientFrontier
+chart.EfficientFrontier(opt_meanvar, match.col="StdDev", n.portfolios=25)
+
+# Rerun the optimization with a new risk aversion parameter to change where the
+# portfolio is along the efficient frontier. The 'optimal' portfolio plotted on
+# the efficient frontier is the optimal portfolio returned by optimize.portfolio.
+meanvar.portf$objectives[[2]]$risk_aversion=0.25
+opt_meanvar <- optimize.portfolio(R=R, portfolio=meanvar.portf, optimize_method="ROI", trace=TRUE)
+chart.EfficientFrontier(opt_meanvar, match.col="StdDev", n.portfolios=25)
+
 # The weights along the efficient frontier can be plotted by passing in the
 # optimize.portfolio output object
 chart.Weights.EF(opt_meanvar, match.col="StdDev")
-# or we can extract the efficient frontier and then plot it
+
+# Extract the efficient frontier and then plot it
+# Note that if you want to do multiple charts of the efficient frontier from
+# the optimize.portfolio object, it is best to use extractEfficientFrontier as shown
+# below
 ef <- extractEfficientFrontier(object=opt_meanvar, match.col="StdDev", n.portfolios=15)
 print(ef)
+summary(ef, digits=5)
 chart.Weights.EF(ef, match.col="StdDev", colorset=bluemono)
 
 # mean-etl efficient frontier
 meanetl.ef <- create.EfficientFrontier(R=R, portfolio=meanetl.portf, type="mean-ES")
 print(meanetl.ef)
 summary(meanetl.ef)
-chart.EfficientFrontier(meanetl.ef, match.col="ES", main="mean-ETL Efficient Frontier", type="l", col="blue")
+chart.EfficientFrontier(meanetl.ef, match.col="ES", main="mean-ETL Efficient Frontier", type="l", col="blue", RAR.text="STARR")
 chart.Weights.EF(meanetl.ef, colorset=bluemono, match.col="ES")
 
 # mean-etl efficient frontier using random portfolios
@@ -86,7 +107,8 @@
 # group constraints (also add long only constraints to the group portfolio)
 group.portf <- add.constraint(portfolio=init.portf, type="group",
-                              groups=c(2, 3),
+                              groups=list(groupA=c(1, 3),
+                                          groupB=c(2, 4, 5)),
                               group_min=c(0.25, 0.15),
                               group_max=c(0.75, 0.55))
 group.portf <- add.constraint(portfolio=group.portf, type="long_only")

From noreply at r-forge.r-project.org Thu Aug 29 04:27:05 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Thu, 29 Aug 2013 04:27:05 +0200 (CEST)
Subject: [Returnanalytics-commits] r2922 - in pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm: . R
Message-ID: <20130829022705.38EFC18591B@r-forge.r-project.org>

Author: shubhanm
Date: 2013-08-29 04:27:01 +0200 (Thu, 29 Aug 2013)
New Revision: 2922

Added:
   pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/inst/
Modified:
   pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION
Log:
noniid.sm package

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION	2013-08-29 01:29:57 UTC (rev 2921)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION	2013-08-29 02:27:01 UTC (rev 2922)
@@ -35,3 +35,5 @@
     'table.UnsmoothReturn.R'
     'UnsmoothReturn.R'
     'EmaxDDGBM.R'
+    'table.EMaxDDGBM.R'
+

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R	2013-08-29 02:27:01 UTC (rev 2922)
@@ -0,0 +1,207 @@
+#' @title Expected Drawdown using Brownian Motion Assumptions
+#'
+#' @description Works on the model specified by Magdon-Ismail, which investigates the behavior of this statistic for a Brownian motion
+#' with drift.
+#' @details If X(t) is a random process on [0, T], the maximum drawdown at time T, D(T), is defined by
+#' \deqn{D(T) = sup [X(s) - X(t)]}
+#' where t belongs to [0, T] and s belongs to [0, t].
+#' Informally, this is the largest drop from a peak to a bottom. In this paper, we investigate the
+#' behavior of this statistic for a Brownian motion with drift. In particular, we give an infinite
+#' series representation of its distribution, and consider its expected value. When the drift is zero,
+#' we give an analytic expression for the expected value, and for non-zero drift, we give an infinite
+#' series representation. For all cases, we compute the limiting \bold{(\eqn{T \rightarrow \infty})} behavior, which can be
+#' logarithmic (\eqn{\mu} > 0), square root (\eqn{\mu} = 0), or linear (\eqn{\mu} < 0).
+#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns
+#' @param digits number of significant digits
+#' @author Shubhankit Mohan
+#' @keywords Expected Drawdown Using Brownian Motion Assumptions
+#' @references Magdon-Ismail, M., Atiya, A., Pratap, A., and Yaser S. Abu-Mostafa: On the Maximum Drawdown of a Brownian Motion, Journal of Applied Probability 41, pp. 147-161, 2004 \url{http://alumnus.caltech.edu/~amir/drawdown-jrnl.pdf}
+#' @keywords Drawdown models Brownian Motion Assumptions
+#' @examples
+#'
+#' library(PerformanceAnalytics)
+#' data(edhec)
+#' table.EMaxDDGBM(edhec)
+#' @rdname table.EMaxDDGBM
+#' @export
+table.EMaxDDGBM <-
+  function (R, digits = 4)
+  {# @author
+
+    # DESCRIPTION:
+    # Downside Risk Summary: Statistics and Stylized Facts
+
+    # Inputs:
+    # R: a regular timeseries of returns (rather than prices)
+    # Output: Table of Estimated Drawdowns
+
+    y = checkData(R, method = "xts")
+    columns = ncol(y)
+    rows = nrow(y)
+    columnnames = colnames(y)
+    rownames = rownames(y)
+    T = nyears(y);
+
+    # for each column, do the following:
+    for(column in 1:columns) {
+      x = y[,column]
+      mu = Return.annualized(x, scale = NA, geometric = TRUE)
+      sig = StdDev(x)
+      gamma <- sqrt(pi/8)
+
+      if(mu == 0){
+
+        Ed <- 2*gamma*sig*sqrt(T)
+
+      }
+
+      else{
+
+        alpha <- mu*sqrt(T/(2*sig^2))
+
+        x <- alpha^2
+
+        if(mu > 0){
+
+          mQp <- matrix(c(
+            0.0005, 0.0010, 0.0015, 0.0020, 0.0025, 0.0050, 0.0075, 0.0100, 0.0125,
+            0.0150, 0.0175, 0.0200, 0.0225, 0.0250, 0.0275, 0.0300, 0.0325, 0.0350,
+            0.0375, 0.0400, 0.0425, 0.0450, 0.0500, 0.0600, 0.0700, 0.0800, 0.0900,
+            0.1000, 0.2000, 0.3000, 0.4000, 0.5000, 1.5000, 2.5000, 3.5000, 4.5000,
+            10, 20, 30, 40, 50, 150, 250, 350, 450, 1000, 2000, 3000, 4000, 5000,
+            0.019690, 0.027694, 0.033789, 0.038896, 0.043372, 0.060721, 0.073808, 0.084693, 0.094171,
+            0.102651, 0.110375, 0.117503, 0.124142, 0.130374, 0.136259, 0.141842, 0.147162,
+            0.152249, 0.157127, 0.161817, 0.166337, 0.170702, 0.179015, 0.194248, 0.207999,
+            0.220581, 0.232212, 0.243050, 0.325071, 0.382016, 0.426452, 0.463159, 0.668992,
+            0.775976, 0.849298, 0.905305, 1.088998, 1.253794, 1.351794, 1.421860, 1.476457,
+            1.747485, 1.874323, 1.958037, 2.020630, 2.219765, 2.392826, 2.494109, 2.565985,
+            2.621743), ncol = 2)
+
+          if(x < 0.0005){
+            Qp <- gamma*sqrt(2*x)
+          }
+          if(x > 0.0005 & x < 5000){
+            Qp <- spline(log(mQp[,1]), mQp[,2], n = 1, xmin = log(x), xmax = log(x))$y
+          }
+          if(x > 5000){
+            Qp <- 0.25*log(x) + 0.49088
+          }
+
+          Ed <- (2*sig^2/mu)*Qp
+
+        }
+
+        if(mu < 0){
+
+          mQn <- matrix(c(
+            0.0005, 0.0010, 0.0015, 0.0020, 0.0025, 0.0050, 0.0075, 0.0100, 0.0125, 0.0150,
+            0.0175, 0.0200, 0.0225, 0.0250, 0.0275, 0.0300, 0.0325, 0.0350, 0.0375, 0.0400,
+            0.0425, 0.0450, 0.0475,
0.0500, 0.0550, 0.0600, 0.0650, 0.0700, 0.0750, 0.0800, + + 0.0850, 0.0900, 0.0950, 0.1000, 0.1500, 0.2000, 0.2500, 0.3000, 0.3500, 0.4000, + + 0.5000, 1.0000, 1.5000, 2.0000, 2.5000, 3.0000, 3.5000, 4.0000, 4.5000, 5.0000, + + 0.019965, 0.028394, 0.034874, 0.040369, 0.045256, 0.064633, 0.079746, 0.092708, + + 0.104259, 0.114814, 0.124608, 0.133772, 0.142429, 0.150739, 0.158565, 0.166229, + + 0.173756, 0.180793, 0.187739, 0.194489, 0.201094, 0.207572, 0.213877, 0.220056, + + 0.231797, 0.243374, 0.254585, 0.265472, 0.276070, 0.286406, 0.296507, 0.306393, + + 0.316066, 0.325586, 0.413136, 0.491599, 0.564333, 0.633007, 0.698849, 0.762455, + + 0.884593, 1.445520, 1.970740, 2.483960, 2.990940, 3.492520, 3.995190, 4.492380, + + 4.990430, 5.498820),ncol=2) + + + + + + if(x<0.0005){ + + Qn<-gamma*sqrt(2*x) + + } + + if(x>0.0005 & x<5000){ + + Qn<-spline(mQn[,1],mQn[,2],n=1,xmin=x,xmax=x)$y + + } + + if(x>5000){ + + Qn<-x+0.50 + + } + + Ed<-(2*sig^2/mu)*(-Qn) + + } + + } + + # return(Ed) + + z = c((mu*100), + (sig*100), + (Ed*100)) + znames = c( + "Annual Returns in %", + "Std Devetions in %", + "Expected Drawdown in %" + ) + if(column == 1) { + resultingtable = data.frame(Value = z, row.names = znames) + } + else { + nextcolumn = data.frame(Value = z, row.names = znames) + resultingtable = cbind(resultingtable, nextcolumn) + } + } + colnames(resultingtable) = columnnames + ans = base::round(resultingtable, digits) + ans + + + } + +############################################################################### +################################################################################ +# R (http://r-project.org/) Econometrics for Performance and Risk Analysis +# +# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson +# +# This R package is distributed under the terms of the GNU Public License (GPL) +# for full details see the file COPYING +# +# $Id: EmaxDDGBM.R 2271 2012-09-02 01:56:23Z braverock $ +# +############################################################################### Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R 2013-08-29 02:27:01 UTC (rev 2922) @@ -0,0 +1,74 @@ +#'@title GLM Index +#'@description +#'Getmansky Lo Markov Smoothing Index is a useful summary statistic for measuring the concentration of weights is +#' a sum of square of Moving Average lag coefficient. +#' This measure is well known in the industrial organization literature as the +#' \bold{ Herfindahl index}, a measure of the concentration of firms in a given industry. +#' The index is maximized when one coefficient is 1 and the rest are 0. In the context of +#'smoothed returns, a lower value implies more smoothing, and the upper bound +#'of 1 implies no smoothing, hence \eqn{\xi} is reffered as a '\bold{smoothingindex}'. +#'\deqn{ \xi = \sum\theta(j)^2} +#'Where j belongs to 0 to k,which is the number of lag factors input. +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of +#' asset returns +#' @author Peter Carl, Brian Peterson, Shubhankit Mohan +#' @aliases Return.Geltner +#' @references \emph{Getmansky, Mila, Lo, Andrew W. and Makarov, Igor} An Econometric Model of Serial Correlation and Illiquidity in Hedge Fund Returns (March 1, 2003). MIT Sloan Working Paper No. 
4288-03; MIT Laboratory for Financial Engineering Working Paper No. LFE-1041A-03; EFMA 2003 Helsinki Meetings. Available at SSRN: \url{http://ssrn.com/abstract=384700} +#' +#' @keywords ts multivariate distribution models non-iid +#' @examples +#' +#' data(edhec) +#' head(GLMSmoothIndex(edhec)) +#' +#' @export +GLMSmoothIndex<- + function(R = NULL) + { + columns = 1 + columnnames = NULL + #Error handling if R is not NULL + if(!is.null(R)){ + x = checkData(R) + columns = ncol(x) + n = nrow(x) + count = q + x=edhec + columns = ncol(x) + columnnames = colnames(x) + + # Calculate AutoCorrelation Coefficient + for(column in 1:columns) { # for each asset passed in as R + y = checkData(x[,column], method="vector", na.rm = TRUE) + sum = sum(abs(acf(y,plot=FALSE,lag.max=6)[[1]][2:7])); + acflag6 = acf(y,plot=FALSE,lag.max=6)[[1]][2:7]/sum; + values = sum(acflag6*acflag6) + + if(column == 1) { + result.df = data.frame(Value = values) + colnames(result.df) = columnnames[column] + } + else { + nextcol = data.frame(Value = values) + colnames(nextcol) = columnnames[column] + result.df = cbind(result.df, nextcol) + } + } + rownames(result.df)= paste("GLM Smooth Index") + + return(result.df) + + } + } + +############################################################################### +# R (http://r-project.org/) Econometrics for Performance and Risk Analysis +# +# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson +# +# This R package is distributed under the terms of the GNU Public License (GPL) +# for full details see the file COPYING +# +# $Id: GLMSmoothIndex.R 2163 2012-07-16 00:30:19Z braverock $ +# +############################################################################### Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R 2013-08-29 02:27:01 UTC (rev 2922) @@ -0,0 +1,91 @@ +#'@title Andrew Lo Sharpe Ratio +#'@description +#' Although the Sharpe ratio has become part of the canon of modern financial +#' analysis, its applications typically do not account for the fact that it is an +#' estimated quantity, subject to estimation errors that can be substantial in +#' some cases. +#' +#' Many studies have documented various violations of the assumption of +#' IID returns for financial securities. +#' +#' Under the assumption of stationarity,a version of the Central Limit Theorem can +#' still be applied to the estimator . +#' @details +#' The relationship between SR and SR(q) is somewhat more involved for non- +#'IID returns because the variance of Rt(q) is not just the sum of the variances of component returns but also includes all the covariances. 
Specifically, under +#' the assumption that returns \eqn{R_t} are stationary, +#' \deqn{ Var[(R_t)] = \sum \sum Cov(R(t-i),R(t-j)) = q{\sigma^2} + 2{\sigma^2} \sum (q-k)\rho(k) } +#' Where \eqn{ \rho(k) = Cov(R(t),R(t-k))/Var[(R_t)]} is the \eqn{k^{th}} order autocorrelation coefficient of the series of returns.This yields the following relationship between SR and SR(q): +#' and i,j belongs to 0 to q-1 +#'\deqn{SR(q) = \eta(q) } +#'Where : +#' \deqn{ }{\eta(q) = [q]/[\sqrt(q\sigma^2) + 2\sigma^2 \sum(q-k)\rho(k)] } +#' Where k belongs to 0 to q-1 +#' @param Ra an xts, vector, matrix, data frame, timeSeries or zoo object of +#' daily asset returns +#' @param Rf an xts, vector, matrix, data frame, timeSeries or zoo object of +#' annualized Risk Free Rate +#' @param q Number of autocorrelated lag periods. Taken as 3 (Default) +#' @param \dots any other pass thru parameters +#' @author Brian G. Peterson, Peter Carl, Shubhankit Mohan +#' @references Getmansky, Mila, Lo, Andrew W. and Makarov, Igor,\emph{ An Econometric Model of Serial Correlation and Illiquidity in Hedge Fund Returns} (March 1, 2003). MIT Sloan Working Paper No. 4288-03; MIT Laboratory for Financial Engineering Working Paper No. LFE-1041A-03; EFMA 2003 Helsinki Meetings. +#' \url{http://ssrn.com/abstract=384700} +#' @keywords ts multivariate distribution models non-iid +#' @examples +#' +#' data(edhec) +#' head(LoSharpe(edhec,0,3) +#' @rdname LoSharpe +#' @export +LoSharpe <- + function (Ra,Rf = 0,q = 3, ...) + { # @author Brian G. Peterson, Peter Carl + + + # Function: + R = checkData(Ra, method="xts") + # Get dimensions and labels + columns.a = ncol(R) + columnnames.a = colnames(R) + # Time used for daily Return manipulations + Time= 252*nyears(edhec) + clean.lo <- function(column.R,q) { + # compute the lagged return series + gamma.k =matrix(0,q) + mu = sum(column.R)/(Time) + Rf= Rf/(Time) + for(i in 1:q){ + lagR = lag(column.R, k=i) + # compute the Momentum Lagged Values + gamma.k[i]= (sum(((column.R-mu)*(lagR-mu)),na.rm=TRUE)) + } + return(gamma.k) + } + neta.lo <- function(pho.k,q) { + # compute the lagged return series + sumq = 0 + for(j in 1:q){ + sumq = sumq+ (q-j)*pho.k[j] + } + return(q/(sqrt(q+2*sumq))) + } + for(column.a in 1:columns.a) { # for each asset passed in as R + # clean the data and get rid of NAs + mu = sum(R[,column.a])/(Time) + sig=sqrt(((R[,column.a]-mu)^2/(Time))) + pho.k = clean.lo(R[,column.a],q)/(as.numeric(sig[1])) + netaq=neta.lo(pho.k,q) + column.lo = (netaq*((mu-Rf)/as.numeric(sig[1]))) + + if(column.a == 1) { lo = column.lo } + else { lo = cbind (lo, column.lo) } + + } + colnames(lo) = columnnames.a + rownames(lo)= paste("Lo Sharpe Ratio") + return(lo) + + + # RESULTS: + + } From noreply at r-forge.r-project.org Thu Aug 29 07:03:36 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 07:03:36 +0200 (CEST) Subject: [Returnanalytics-commits] r2923 - in pkg/PortfolioAnalytics: R man sandbox Message-ID: <20130829050336.2DA1018543D@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-29 07:03:35 +0200 (Thu, 29 Aug 2013) New Revision: 2923 Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R Log: Modifying graphical parameters for efficient frontier plots. Adding example to testing_efficient_frontier. 
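A minimal usage sketch of the asset.names and legend.loc arguments introduced in this revision; the portfolio specification below is illustrative only, following the package demos, and assumes the ROI solver back-ends are installed:

library(PortfolioAnalytics)
data(edhec)
R <- edhec[, 1:5]
# long-only, fully invested mean-StdDev portfolio
p <- portfolio.spec(assets=colnames(R))
p <- add.constraint(portfolio=p, type="full_investment")
p <- add.constraint(portfolio=p, type="long_only")
p <- add.objective(portfolio=p, type="return", name="mean")
p <- add.objective(portfolio=p, type="risk", name="StdDev")
opt <- optimize.portfolio(R=R, portfolio=p, optimize_method="ROI", trace=TRUE)
# suppress the asset labels on the risk-return scatter
chart.EfficientFrontier(opt, match.col="StdDev", n.portfolios=25, asset.names=FALSE)
# drop the default legend so a custom one can be placed with legend()
chart.Weights.EF(opt, match.col="StdDev", legend.loc=NULL)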
Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-29 02:27:01 UTC (rev 2922) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-29 05:03:35 UTC (rev 2923) @@ -47,7 +47,7 @@ #' @param rf risk free rate. If \code{rf} is not null, the maximum Sharpe Ratio or modified Sharpe Ratio tangency portfolio will be plotted #' @param cex.legend A numerical value giving the amount by which the legend should be magnified relative to the default. #' @param RAR.text Risk Adjusted Return ratio text to plot in the legend -#' @param chart.assets TRUE/FALSE to include risk-return scatter of assets +#' @param asset.names TRUE/FALSE to include the asset names in the plot #' @author Ross Bennett #' @export chart.EfficientFrontier <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ @@ -56,7 +56,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.optimize.portfolio.ROI <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., rf=0, cex.legend=0.8){ +chart.EfficientFrontier.optimize.portfolio.ROI <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., rf=0, cex.legend=0.8, asset.names=TRUE){ if(!inherits(object, "optimize.portfolio.ROI")) stop("object must be of class optimize.portfolio.ROI") portf <- object$portfolio @@ -98,7 +98,7 @@ } if(match.col == "StdDev"){ frontier <- meanvar.efficient.frontier(portfolio=portf, R=R, n.portfolios=n.portfolios) - rar <- "Sharpe Ratio" + rar <- "SR" } # data points to plot the frontier x.f <- frontier[, match.col] @@ -114,14 +114,18 @@ # set the x and y limits if(is.null(xlim)){ xlim <- range(c(x.f, asset_risk)) + xlim[1] <- xlim[1] * 0.8 + xlim[2] <- xlim[2] * 1.15 } if(is.null(ylim)){ ylim <- range(c(y.f, asset_ret)) + ylim[1] <- ylim[1] * 0.9 + ylim[2] <- ylim[2] * 1.1 } # plot a scatter of the assets - plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) - text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="Mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) 
+ if(asset.names) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) # plot the efficient line lines(x=x.f, y=y.f, col="darkgray", lwd=2) # plot the optimal portfolio @@ -143,7 +147,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="Modified Sharpe", rf=0, cex.legend=0.8){ +chart.EfficientFrontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, cex.legend=0.8, asset.names=TRUE){ # This function will work with objects of class optimize.portfolio.DEoptim, # optimize.portfolio.random, and optimize.portfolio.pso @@ -199,14 +203,18 @@ # set the x and y limits if(is.null(xlim)){ xlim <- range(c(x.f, asset_risk)) + xlim[1] <- xlim[1] * 0.8 + xlim[2] <- xlim[2] * 1.15 } if(is.null(ylim)){ ylim <- range(c(y.f, asset_ret)) + ylim[1] <- ylim[1] * 0.9 + ylim[2] <- ylim[2] * 1.1 } # plot a scatter of the assets - plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) - text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="Mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) + if(asset.names) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) # plot the efficient line lines(x=x.f, y=y.f, col="darkgray", lwd=2) # plot the optimal portfolio @@ -243,6 +251,7 @@ #' @param cex.legend The magnification to be used for sizing the legend relative to the current setting of 'cex', similar to \code{\link{plot}}. #' @param legend.labels character vector to use for the legend labels #' @param element.color provides the color for drawing less-important chart elements, such as the box lines, axis lines, etc. +#' @param legend.loc NULL, "topright", "right", or "bottomright". If legend.loc is NULL, the legend will not be plotted. #' @author Ross Bennett #' @export chart.Weights.EF <- function(object, colorset=NULL, ..., n.portfolios=25, match.col="ES", main="EF Weights", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray"){ @@ -251,7 +260,7 @@ #' @rdname chart.Weights.EF #' @export -chart.Weights.EF.efficient.frontier <- function(object, colorset=NULL, ..., n.portfolios=25, match.col="ES", main="EF Weights", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray"){ +chart.Weights.EF.efficient.frontier <- function(object, colorset=NULL, ..., n.portfolios=25, match.col="ES", main="EF Weights", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray", legend.loc="topright"){ # using ideas from weightsPlot.R in fPortfolio package if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") @@ -299,7 +308,11 @@ dim <- dim(wts) range <- dim[1] xmin <- 0 - xmax <- range + 0.2 * range + if(is.null(legend.loc)){ + xmax <- range + } else { + xmax <- range + 0.3 * range + } # set the colorset if no colorset is passed in if(is.null(colorset)) @@ -309,14 +322,17 @@ barplot(t(pos.weights), col = colorset, space = 0, ylab = "", xlim = c(xmin, xmax), ylim = c(ymin, ymax), border = element.color, cex.axis=cex.axis, - axisnames=FALSE,...) + axisnames=FALSE, ...) 
- # set the legend information - if(is.null(legend.labels)){ - legend.labels <- gsub(pattern="^w\\.", replacement="", cnames[wts_idx]) + if(!is.null(legend.loc)){ + if(legend.loc %in% c("topright", "right", "bottomright")){ + # set the legend information + if(is.null(legend.labels)){ + legend.labels <- gsub(pattern="^w\\.", replacement="", cnames[wts_idx]) + } + legend(legend.loc, legend = legend.labels, bty = "n", cex = cex.legend, fill = colorset) + } } - legend("topright", legend = legend.labels, bty = "n", cex = cex.legend, fill = colorset) - # plot the negative weights barplot(t(neg.weights), col = colorset, space = 0, add = TRUE, border = element.color, cex.axis=cex.axis, axes=FALSE, axisnames=FALSE, ...) @@ -333,17 +349,18 @@ axis(1, at = M, labels = signif(ef.return[M], 3), cex.axis=cex.axis) # axis labels and titles - mtext("Risk", side = 3, line = 2, adj = 1, cex = cex.lab) - mtext("Return", side = 1, line = 2, adj = 1, cex = cex.lab) + mtext(match.col, side = 3, line = 2, adj = 0.5, cex = cex.lab) + mtext("Mean", side = 1, line = 2, adj = 0.5, cex = cex.lab) mtext("Weight", side = 2, line = 2, adj = 1, cex = cex.lab) # add title - mtext(main, adj = 0, line = 2.5, font = 2, cex = 0.8) + title(main=main, line=3) + # mtext(main, adj = 0, line = 2.5, font = 2, cex = 0.8) box(col=element.color) } #' @rdname chart.Weights.EF #' @export -chart.Weights.EF.optimize.portfolio <- function(object, colorset=NULL, ..., n.portfolios=25, match.col="ES", main="EF Weights", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray"){ +chart.Weights.EF.optimize.portfolio <- function(object, colorset=NULL, ..., n.portfolios=25, match.col="ES", main="EF Weights", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray", legend.loc="topright"){ # chart the weights along the efficient frontier of an objected created by optimize.portfolio if(!inherits(object, "optimize.portfolio")) stop("object must be of class optimize.portfolio") @@ -352,12 +369,13 @@ PortfolioAnalytics:::chart.Weights.EF(object=frontier, colorset=colorset, ..., match.col=match.col, main=main, cex.lab=cex.lab, cex.axis=cex.axis, cex.legend=cex.legend, - legend.labels=legend.labels, element.color=element.color) + legend.labels=legend.labels, element.color=element.color, + legend.loc=legend.loc) } #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.efficient.frontier <- function(object, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="Modified Sharpe", rf=0, chart.assets=TRUE, cex.legend=0.8){ +chart.EfficientFrontier.efficient.frontier <- function(object, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, cex.legend=0.8, asset.names=TRUE){ if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") # get the returns and efficient frontier object @@ -381,20 +399,22 @@ } if(is.na(mtc)) stop("could not match match.col with column name of efficient frontier") - if(chart.assets){ - # get the data to plot scatter of asset returns - asset_ret <- scatterFUN(R=R, FUN="mean") - asset_risk <- scatterFUN(R=R, FUN=match.col) - rnames <- colnames(R) - - # set the x and y limits - if(is.null(xlim)){ - xlim <- range(c(frontier[, mtc], asset_risk)) - } - if(is.null(ylim)){ - ylim <- range(c(frontier[, mean.mtc], asset_ret)) - } + # get the data to plot 
scatter of asset returns + asset_ret <- scatterFUN(R=R, FUN="mean") + asset_risk <- scatterFUN(R=R, FUN=match.col) + rnames <- colnames(R) + + # set the x and y limits + if(is.null(xlim)){ + xlim <- range(c(frontier[, mtc], asset_risk)) + xlim[1] <- xlim[1] * 0.8 + xlim[2] <- xlim[2] * 1.15 } + if(is.null(ylim)){ + ylim <- range(c(frontier[, mean.mtc], asset_ret)) + ylim[1] <- ylim[1] * 0.9 + ylim[2] <- ylim[2] * 1.1 + } if(!is.null(rf)){ sr <- (frontier[, mean.mtc] - rf) / (frontier[, mtc]) @@ -403,12 +423,12 @@ } # plot the efficient frontier line - plot(x=frontier[, mtc], y=frontier[, mean.mtc], ylab="mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) - if(chart.assets){ - # risk-return scatter of the assets - points(x=asset_risk, y=asset_ret) - text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) - } + plot(x=frontier[, mtc], y=frontier[, mean.mtc], ylab="Mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) + + # risk-return scatter of the assets + points(x=asset_risk, y=asset_ret) + if(asset.names) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + if(!is.null(rf)){ # Plot tangency line and points at risk-free rate and tangency portfolio abline(rf, srmax, lty=2) @@ -444,9 +464,10 @@ #' @param xlim set the x-axis limit, same as in \code{\link{plot}} #' @param ylim set the y-axis limit, same as in \code{\link{plot}} #' @param ... passthrough parameters to \code{\link{plot}} +#' @param asset.names TRUE/FALSE to include the asset names in the plot #' @author Ross Bennett #' @export -chart.EfficientFrontierOverlay <- function(R, portfolio_list, type, n.portfolios=25, match.col="ES", search_size=2000, main="Efficient Frontiers", cex.axis=0.8, element.color="darkgray", legend.loc=NULL, legend.labels=NULL, cex.legend=0.8, xlim=NULL, ylim=NULL, ...){ +chart.EfficientFrontierOverlay <- function(R, portfolio_list, type, n.portfolios=25, match.col="ES", search_size=2000, main="Efficient Frontiers", cex.axis=0.8, element.color="darkgray", legend.loc=NULL, legend.labels=NULL, cex.legend=0.8, xlim=NULL, ylim=NULL, ..., asset.names=TRUE){ # create multiple efficient frontier objects (one per portfolio in portfolio_list) if(!is.list(portfolio_list)) stop("portfolio_list must be passed in as a list") if(length(portfolio_list) == 1) warning("Only one portfolio object in portfolio_list") @@ -460,14 +481,27 @@ asset_ret <- scatterFUN(R=R, FUN="mean") asset_risk <- scatterFUN(R=R, FUN=match.col) rnames <- colnames(R) + + # set the x and y limits + if(is.null(xlim)){ + xlim <- range(asset_risk) + xlim[1] <- xlim[1] * 0.8 + xlim[2] <- xlim[2] * 1.15 + } + if(is.null(ylim)){ + ylim <- range(asset_ret) + ylim[1] <- ylim[1] * 0.9 + ylim[2] <- ylim[2] * 1.1 + } + # plot the assets - plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) + plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="Mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) 
axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) # risk-return scatter of the assets points(x=asset_risk, y=asset_ret) - text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + if(asset.names) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) for(i in 1:length(out)){ tmp <- out[[i]] Modified: pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-29 02:27:01 UTC (rev 2922) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-29 05:03:35 UTC (rev 2923) @@ -15,22 +15,21 @@ ylim = NULL, cex.axis = 0.8, element.color = "darkgray", main = "Efficient Frontier", ..., rf = 0, - cex.legend = 0.8) + cex.legend = 0.8, asset.names = TRUE) chart.EfficientFrontier.optimize.portfolio(object, match.col = "ES", n.portfolios = 25, xlim = NULL, ylim = NULL, cex.axis = 0.8, element.color = "darkgray", - main = "Efficient Frontier", ..., - RAR.text = "Modified Sharpe", rf = 0, cex.legend = 0.8) + main = "Efficient Frontier", ..., RAR.text = "SR", + rf = 0, cex.legend = 0.8, asset.names = TRUE) chart.EfficientFrontier.efficient.frontier(object, match.col = "ES", n.portfolios = NULL, xlim = NULL, ylim = NULL, cex.axis = 0.8, element.color = "darkgray", - main = "Efficient Frontier", ..., - RAR.text = "Modified Sharpe", rf = 0, - chart.assets = TRUE, cex.legend = 0.8) + main = "Efficient Frontier", ..., RAR.text = "SR", + rf = 0, cex.legend = 0.8, asset.names = TRUE) } \arguments{ \item{object}{optimal portfolio created by @@ -74,8 +73,8 @@ \item{RAR.text}{Risk Adjusted Return ratio text to plot in the legend} - \item{chart.assets}{TRUE/FALSE to include risk-return - scatter of assets} + \item{asset.names}{TRUE/FALSE to include the asset names + in the plot} } \description{ This function charts the efficient frontier and Modified: pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd 2013-08-29 02:27:01 UTC (rev 2922) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd 2013-08-29 05:03:35 UTC (rev 2923) @@ -7,7 +7,8 @@ search_size = 2000, main = "Efficient Frontiers", cex.axis = 0.8, element.color = "darkgray", legend.loc = NULL, legend.labels = NULL, - cex.legend = 0.8, xlim = NULL, ylim = NULL, ...) 
+ cex.legend = 0.8, xlim = NULL, ylim = NULL, ..., + asset.names = TRUE) } \arguments{ \item{R}{an xts object of asset returns} @@ -57,6 +58,9 @@ \code{\link{plot}}} \item{...}{passthrough parameters to \code{\link{plot}}} + + \item{asset.names}{TRUE/FALSE to include the asset names + in the plot} } \description{ Overlay the efficient frontiers of multiple portfolio Modified: pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd 2013-08-29 02:27:01 UTC (rev 2922) +++ pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd 2013-08-29 05:03:35 UTC (rev 2923) @@ -14,13 +14,13 @@ colorset = NULL, ..., n.portfolios = 25, match.col = "ES", main = "EF Weights", cex.lab = 0.8, cex.axis = 0.8, cex.legend = 0.8, legend.labels = NULL, - element.color = "darkgray") + element.color = "darkgray", legend.loc = "topright") chart.Weights.EF.optimize.portfolio(object, colorset = NULL, ..., n.portfolios = 25, match.col = "ES", main = "EF Weights", cex.lab = 0.8, cex.axis = 0.8, cex.legend = 0.8, legend.labels = NULL, - element.color = "darkgray") + element.color = "darkgray", legend.loc = "topright") } \arguments{ \item{object}{object of class \code{efficient.frontier} @@ -58,6 +58,10 @@ \item{element.color}{provides the color for drawing less-important chart elements, such as the box lines, axis lines, etc.} + + \item{legend.loc}{NULL, "topright", "right", or + "bottomright". If legend.loc is NULL, the legend will not + be plotted.} } \description{ This creates a stacked column chart of the weights of Modified: pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R 2013-08-29 02:27:01 UTC (rev 2922) +++ pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R 2013-08-29 05:03:35 UTC (rev 2923) @@ -42,12 +42,19 @@ meanvar.ef$frontier # The RAR.text argument can be used for the risk-adjusted-return name on the legend, # by default it is 'Modified Sharpe Ratio' -chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="b", RAR.text="Sharpe Ratio") +chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l", RAR.text="Sharpe Ratio", pch=4) # The tangency portfolio and line are plotted by default, these can be ommitted # by setting rf=NULL chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l", rf=NULL) chart.Weights.EF(meanvar.ef, colorset=bluemono, match.col="StdDev") +# If you have a lot of assets and they don't fit with the default legend, you +# can set legend.loc=NULL and customize the plot. 
+par(mar=c(8, 4, 4, 2)+0.1, xpd=TRUE) +chart.Weights.EF(meanvar.ef, colorset=bluemono, match.col="StdDev", legend.loc=NULL) +legend("bottom", legend=colnames(R), inset=-1, fill=bluemono, bty="n", ncol=3, cex=0.8) +par(mar=c(5, 4, 4, 2)+0.1, xpd=FALSE) + # run optimize.portfolio and chart the efficient frontier for that object opt_meanvar <- optimize.portfolio(R=R, portfolio=meanvar.portf, optimize_method="ROI", trace=TRUE) @@ -116,6 +123,7 @@ portf.list <- list(lo.portf, box.portf, group.portf) legend.labels <- c("Long Only", "Box", "Group + Long Only") -chart.EfficientFrontierOverlay(R=R, portfolio_list=portf.list, type="mean-StdDev", match.col="StdDev", - legend.loc="right", legend.labels=legend.labels) +chart.EfficientFrontierOverlay(R=R, portfolio_list=portf.list, type="mean-StdDev", + match.col="StdDev", legend.loc="topleft", + legend.labels=legend.labels, cex.legend=0.6) From noreply at r-forge.r-project.org Thu Aug 29 11:00:39 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 11:00:39 +0200 (CEST) Subject: [Returnanalytics-commits] r2924 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: . .Rproj.user Message-ID: <20130829090039.EE3D11859E8@r-forge.r-project.org> Author: braverock Date: 2013-08-29 11:00:39 +0200 (Thu, 29 Aug 2013) New Revision: 2924 Removed: pkg/PerformanceAnalytics/sandbox/Shubhankit/.Rproj.user/E5D7D248/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm.Rcheck/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm_0.1.tar.gz pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm_1.0.tar.gz Log: - revmove files that shouldn't be under version control Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm_0.1.tar.gz =================================================================== (Binary files differ) Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm_1.0.tar.gz =================================================================== (Binary files differ) From noreply at r-forge.r-project.org Thu Aug 29 11:06:59 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 11:06:59 +0200 (CEST) Subject: [Returnanalytics-commits] r2925 - pkg/Meucci/R Message-ID: <20130829090659.39EC41859E8@r-forge.r-project.org> Author: xavierv Date: 2013-08-29 11:06:58 +0200 (Thu, 29 Aug 2013) New Revision: 2925 Added: pkg/Meucci/R/FitOrnsteinUhlenbeck.R Log: - fixed FitOrnsteinUhlenbeck documentation Added: pkg/Meucci/R/FitOrnsteinUhlenbeck.R =================================================================== --- pkg/Meucci/R/FitOrnsteinUhlenbeck.R (rev 0) +++ pkg/Meucci/R/FitOrnsteinUhlenbeck.R 2013-08-29 09:06:58 UTC (rev 2925) @@ -0,0 +1,55 @@ +#' Fit a multivariate OU process at estimation step tau, as described in A. 
Meucci +#' "Risk and Asset Allocation", Springer, 2005 +#' +#' @param Y : [matrix] (T x N) +#' @param tau : [scalar] time step +#' +#' @return Mu : [vector] long-term means +#' @return Th : [matrix] whose eigenvalues have positive real part / mean reversion speed +#' @return Sig : [matrix] Sig = S * S', covariance matrix of Brownian motions +#' +#' @note +#' o dY_t = -Th * (Y_t - Mu) * dt + S * dB_t where +#' o dB_t: vector of Brownian motions +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "FitOrnsteinUhlenbeck.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} +#' @export + +FitOrnsteinUhlenbeck = function( Y, tau ) +{ + T = nrow(Y); + N = ncol(Y); + + X = Y[ -1, ]; + F = cbind( matrix( 1, T-1, 1 ), Y[ -nrow(Y), ] ); + E_XF = t(X) %*% F / T; + E_FF = t(F) %*% F / T; + B = E_XF %*% solve( E_FF ); + if( length( B[ , -1 ] ) != 1 ) + { + Th = -logm( B[ , -1 ] ) / tau; + + }else + { + Th = -log( B[ , -1 ] ) / tau; + } + + Mu = solve( diag( 1, N ) - B[ , -1 ] ) %*% B[ , 1 ] ; + + U = F %*% t(B) - X; + + Sig_tau = cov(U); + + N = length(Mu); + TsT = kron( Th, diag( 1, N ) ) + kron( diag( 1, N ), Th ); + + VecSig_tau = matrix(Sig_tau, N^2, 1); + VecSig = ( solve( diag( 1, N^2 ) - expm( -TsT * tau ) ) %*% TsT ) %*% VecSig_tau; + Sig = matrix( VecSig, N, N ); + + return( list( Mu = Mu, Theta = Th, Sigma = Sig ) ) +} \ No newline at end of file From noreply at r-forge.r-project.org Thu Aug 29 11:10:42 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 11:10:42 +0200 (CEST) Subject: [Returnanalytics-commits] r2926 - in pkg/Meucci: . man Message-ID: <20130829091042.416471859E8@r-forge.r-project.org> Author: xavierv Date: 2013-08-29 11:10:41 +0200 (Thu, 29 Aug 2013) New Revision: 2926 Modified: pkg/Meucci/DESCRIPTION pkg/Meucci/man/FitOrnsteinUhlenbeck.Rd Log: - generated FitOrnsteinUhlenbeck documentation Modified: pkg/Meucci/DESCRIPTION =================================================================== --- pkg/Meucci/DESCRIPTION 2013-08-29 09:06:58 UTC (rev 2925) +++ pkg/Meucci/DESCRIPTION 2013-08-29 09:10:41 UTC (rev 2926) @@ -95,3 +95,4 @@ 'EfficientFrontierPrices.R' ' FitOrnsteinUhlenbeck.R' + 'FitOrnsteinUhlenbeck.R' Modified: pkg/Meucci/man/FitOrnsteinUhlenbeck.Rd =================================================================== --- pkg/Meucci/man/FitOrnsteinUhlenbeck.Rd 2013-08-29 09:06:58 UTC (rev 2925) +++ pkg/Meucci/man/FitOrnsteinUhlenbeck.Rd 2013-08-29 09:10:41 UTC (rev 2926) @@ -4,11 +4,17 @@ "Risk and Asset Allocation", Springer, 2005} \usage{ FitOrnsteinUhlenbeck(Y, tau) + + FitOrnsteinUhlenbeck(Y, tau) } \arguments{ \item{Y}{: [matrix] (T x N)} \item{tau}{: [scalar] time step} + + \item{Y}{: [matrix] (T x N)} + + \item{tau}{: [scalar] time step} } \value{ Mu : [vector] long-term means @@ -18,21 +24,41 @@ Sig : [matrix] Sig = S * S', covariance matrix of Brownian motions + + Mu : [vector] long-term means + + Th : [matrix] whose eigenvalues have positive real part / + mean reversion speed + + Sig : [matrix] Sig = S * S', covariance matrix of + Brownian motions } \description{ Fit a multivariate OU process at estimation step tau, as described in A. Meucci "Risk and Asset Allocation", Springer, 2005 + + Fit a multivariate OU process at estimation step tau, as + described in A. 
Meucci "Risk and Asset Allocation", + Springer, 2005 } \note{ o dY_t = -Th * (Y_t - Mu) * dt + S * dB_t where o dB_t: vector of Brownian motions + + o dY_t = -Th * (Y_t - Mu) * dt + S * dB_t where o dB_t: + vector of Brownian motions } \author{ Xavier Valls \email{flamejat at gmail.com} + + Xavier Valls \email{flamejat at gmail.com} } \references{ \url{http://symmys.com/node/170} See Meucci's script for "EfficientFrontierReturns.m" + + \url{http://symmys.com/node/170} See Meucci's script for + "FitOrnsteinUhlenbeck.m" } From noreply at r-forge.r-project.org Thu Aug 29 11:11:47 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 11:11:47 +0200 (CEST) Subject: [Returnanalytics-commits] r2927 - pkg/Meucci Message-ID: <20130829091147.D65D9183E7A@r-forge.r-project.org> Author: xavierv Date: 2013-08-29 11:11:47 +0200 (Thu, 29 Aug 2013) New Revision: 2927 Modified: pkg/Meucci/DESCRIPTION Log: - fixed DESCRIPTION file Modified: pkg/Meucci/DESCRIPTION =================================================================== --- pkg/Meucci/DESCRIPTION 2013-08-29 09:10:41 UTC (rev 2926) +++ pkg/Meucci/DESCRIPTION 2013-08-29 09:11:47 UTC (rev 2927) @@ -93,6 +93,4 @@ 'EfficientFrontierReturnsBenchmark.R' 'EfficientFrontierReturns.R' 'EfficientFrontierPrices.R' - ' - FitOrnsteinUhlenbeck.R' 'FitOrnsteinUhlenbeck.R' From noreply at r-forge.r-project.org Thu Aug 29 12:03:25 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 12:03:25 +0200 (CEST) Subject: [Returnanalytics-commits] r2928 - in pkg/Meucci: . R data demo man Message-ID: <20130829100325.81A3F184BB1@r-forge.r-project.org> Author: xavierv Date: 2013-08-29 12:03:24 +0200 (Thu, 29 Aug 2013) New Revision: 2928 Added: pkg/Meucci/R/PlotVolVsCompositionEfficientFrontier.R pkg/Meucci/data/covNRets.Rda pkg/Meucci/demo/S_MeanVarianceCalls.R pkg/Meucci/man/PlotVolVsCompositionEfficientFrontier.Rd Modified: pkg/Meucci/DESCRIPTION pkg/Meucci/NAMESPACE pkg/Meucci/man/FitOrnsteinUhlenbeck.Rd Log: - added S_MeanVarianceCalls demo script from chapter 6 and its associated functions Modified: pkg/Meucci/DESCRIPTION =================================================================== --- pkg/Meucci/DESCRIPTION 2013-08-29 09:11:47 UTC (rev 2927) +++ pkg/Meucci/DESCRIPTION 2013-08-29 10:03:24 UTC (rev 2928) @@ -94,3 +94,6 @@ 'EfficientFrontierReturns.R' 'EfficientFrontierPrices.R' 'FitOrnsteinUhlenbeck.R' + ' + FitOrnsteinUhlenbeck.R' + 'PlotVolVsCompositionEfficientFrontier.R' Modified: pkg/Meucci/NAMESPACE =================================================================== --- pkg/Meucci/NAMESPACE 2013-08-29 09:11:47 UTC (rev 2927) +++ pkg/Meucci/NAMESPACE 2013-08-29 10:03:24 UTC (rev 2928) @@ -39,6 +39,7 @@ export(PerformIidAnalysis) export(PlotDistributions) export(PlotMarginalsNormalInverseWishart) +export(PlotVolVsCompositionEfficientFrontier) export(ProjectionStudentT) export(QuantileMixture) export(RandNormalInverseWishart) Added: pkg/Meucci/R/PlotVolVsCompositionEfficientFrontier.R =================================================================== --- pkg/Meucci/R/PlotVolVsCompositionEfficientFrontier.R (rev 0) +++ pkg/Meucci/R/PlotVolVsCompositionEfficientFrontier.R 2013-08-29 10:03:24 UTC (rev 2928) @@ -0,0 +1,38 @@ +#' Plot the efficient frontier in the plane of portfolio weights versus standard deviation, +#' as described in A. Meucci, "Risk and Asset Allocation", Springer, 2005. 
+#' +#' @param Portfolios: [matrix] (M x N) of portfolios weights +#' @param vol : [vector] (M x 1) of volatilities +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "PlotVolVsCompositionEfficientFrontier.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} +#' @export + +PlotVolVsCompositionEfficientFrontier = function( Portfolios, vol ) +{ + + colors = c( "cyan","white","magenta","green","black","red" ); + numcolors = length(colors); + + dev.new(); + xx = dim( Portfolios )[ 1 ]; + N = dim( Portfolios )[ 2 ]; + + Data = t( apply( Portfolios, 1, cumsum ) ); + plot(c(0,0), xlim= c( min(vol)*100, max(vol)*100), ylim = c(0, max(Data)), xlab = "Risk %", ylab = "Portfolio weights") ; + + for( n in 1 : N ) + { + x = rbind( 1, matrix( 1 : xx ), xx ); + v = rbind( min(vol), vol, max(vol) ) * 100 + y = rbind( 0, matrix(Data[ , N-n+1 ]), 0 ); + polygon( v, y, col = colors[ mod( n, numcolors ) + 1 ] ); + } + #set(gca,'xlim',[v(1) v(end)],'ylim',[0 max(max(Data))]) + #xlabel('Risk %') + #ylabel('Portfolio weights') + +} \ No newline at end of file Added: pkg/Meucci/data/covNRets.Rda =================================================================== (Binary files differ) Property changes on: pkg/Meucci/data/covNRets.Rda ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: pkg/Meucci/demo/S_MeanVarianceCalls.R =================================================================== --- pkg/Meucci/demo/S_MeanVarianceCalls.R (rev 0) +++ pkg/Meucci/demo/S_MeanVarianceCalls.R 2013-08-29 10:03:24 UTC (rev 2928) @@ -0,0 +1,75 @@ +#' This script computes the mean-variance frontier of a set of options +#' Described in A. Meucci,"Risk and Asset Allocation", Springer, 2005, Chapter 6. 
+#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_MeanVarianceCalls.m" +# +#' @author Xavier Valls \email{flamejat@@gmail.com} +################################################################################################################## +### Load dat + +load("../data/db.Rda" ); + +################################################################################################################## +### Inputs + +# market +Stock_0 = db$Stock[ nrow(db$Stock), ]; +Vol_0 = db$Vol[ nrow(db$Stock), ]; +Strike = Stock_0; # ATM strike +Est = apply( diff( db$Dates ), 2, mean ) / 252; # estimation interval +Hor = 2 * Est; # investment horizon + +# constraints +N = length( Vol_0 ); +Constr = list( Aeq = matrix( 1, 1, N ), beq = 1, Aleq = rbind( diag( 1, N ), -diag( 1, N ) ), + bleq = rbind( matrix( 1, N, 1 ), matrix( 0, N, 1 ) ) + ); + +J = 10000; # num simulations + +################################################################################################################## +### Allocation process +# quest for invariance +x_Stock = diff( log( db$Stock ) ); +x_Vol = diff( log( db$Vol ) ); + +# estimation +M = matrix( apply(cbind( x_Stock, x_Vol ), 2, mean ) ); +S = cov( cbind( x_Stock, x_Vol ) ); + +# projection +M_Hor = M * Hor / Est; +S_Hor = S * Hor / Est; +X = rmvnorm( J, M_Hor, S_Hor, method = "svd" ); +X_Stock = X[ , 1:N ]; +X_Vol = X[ , (N + 1):ncol(X) ]; + +Stock_Hor = repmat( Stock_0, J, 1 ) * exp( X_Stock ); +Vol_Hor = repmat( Vol_0, J, 1 ) * exp( X_Vol ); + +################################################################################################################## +### Pricing +Call_0 = NULL; +Call_Hor = NULL; +for( n in 1 : N ) +{ + Rate = 0.04; + Call_0 = cbind( Call_0, BlackScholesCallPrice( Stock_0[ n ], Strike[ n ], Rate, Vol_0[ n ], db$Expiry[ n ] )$c ); + Call_Hor = cbind( Call_Hor, BlackScholesCallPrice(Stock_Hor[ , n ], Strike[ n ], Rate, Vol_Hor[ ,n ], db$Expiry[ n] - Hor )$c ); +} + +################################################################################################################## +### Mean-variance +L = Call_Hor / repmat( Call_0, J, 1 ) - 1; +ExpectedValues = matrix( apply( L, 2, mean) ); +Covariance = cov( L ); +NumPortf = 40; +#[e, vol, w] = +EFR = EfficientFrontierReturns( NumPortf, Covariance, ExpectedValues, Constr ); + +################################################################################################################## +### Plots +PlotVolVsCompositionEfficientFrontier( EFR$Composition, EFR$Volatility ); + Modified: pkg/Meucci/man/FitOrnsteinUhlenbeck.Rd =================================================================== --- pkg/Meucci/man/FitOrnsteinUhlenbeck.Rd 2013-08-29 09:11:47 UTC (rev 2927) +++ pkg/Meucci/man/FitOrnsteinUhlenbeck.Rd 2013-08-29 10:03:24 UTC (rev 2928) @@ -56,9 +56,9 @@ } \references{ \url{http://symmys.com/node/170} See Meucci's script for - "EfficientFrontierReturns.m" + "FitOrnsteinUhlenbeck.m" \url{http://symmys.com/node/170} See Meucci's script for - "FitOrnsteinUhlenbeck.m" + "EfficientFrontierReturns.m" } Added: pkg/Meucci/man/PlotVolVsCompositionEfficientFrontier.Rd =================================================================== --- pkg/Meucci/man/PlotVolVsCompositionEfficientFrontier.Rd (rev 0) +++ pkg/Meucci/man/PlotVolVsCompositionEfficientFrontier.Rd 2013-08-29 10:03:24 UTC (rev 2928) @@ -0,0 +1,26 @@ +\name{PlotVolVsCompositionEfficientFrontier} +\alias{PlotVolVsCompositionEfficientFrontier} +\title{Plot the efficient frontier in the plane of portfolio 
weights versus standard deviation, +as described in A. Meucci, "Risk and Asset Allocation", Springer, 2005.} +\usage{ + PlotVolVsCompositionEfficientFrontier(Portfolios, vol) +} +\arguments{ + \item{Portfolios:}{[matrix] (M x N) of portfolios + weights} + + \item{vol}{: [vector] (M x 1) of volatilities} +} +\description{ + Plot the efficient frontier in the plane of portfolio + weights versus standard deviation, as described in A. + Meucci, "Risk and Asset Allocation", Springer, 2005. +} +\author{ + Xavier Valls \email{flamejat at gmail.com} +} +\references{ + \url{http://symmys.com/node/170} See Meucci's script for + "PlotVolVsCompositionEfficientFrontier.m" +} + From noreply at r-forge.r-project.org Thu Aug 29 12:11:43 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 12:11:43 +0200 (CEST) Subject: [Returnanalytics-commits] r2929 - in pkg/PerformanceAnalytics/sandbox/pulkit: R man vignettes Message-ID: <20130829101143.7D12C1840BC@r-forge.r-project.org> Author: pulkit Date: 2013-08-29 12:11:43 +0200 (Thu, 29 Aug 2013) New Revision: 2929 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.pdf Log: changes during R CMD check Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-29 10:03:24 UTC (rev 2928) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-29 10:11:43 UTC (rev 2929) @@ -4,21 +4,21 @@ #'@description #'The drawdown beta is formulated as follows #' -#'\deqn{\beta_DD = \frac{{\sum_{t=1}^T}{q_t^\**}{(w_{k^{\**}(t)}-w_t)}}{D_{\alpha}(w^M)}} +#'\deqn{\beta_DD = \frac{{\sum_{t=1}^T}{q_t^{*}}(w_{k^{*}(t)}-w_t)}{D_{\alpha}(w^M)}} #' here \eqn{\beta_DD} is the drawdown beta of the instrument. #'\eqn{k^{\**}(t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_k^M} #' -#'\eqn{q_t^\**=1/((1-\alpha)T)} if \eqn{d_t^M} is one of the +#'\eqn{q_t^{*}=1/((1-\alpha)T)} if \eqn{d_t^M} is one of the #'\eqn{(1-\alpha)T} largest drawdowns \eqn{d_1^{M} ,......d_t^M} of the -#'optimal portfolio and \eqn{q_t^\** = 0} otherwise. It is assumed -#'that \eqn{D_\alpha(w^M) {\neq} 0} and that \eqn{q_t^\**} and -#'\eqn{k^{\**}(t)} are uniquely determined for all \eqn{t = 1....T} +#'optimal portfolio and \eqn{q_t^{*} = 0} otherwise. It is assumed +#'that \eqn{D_{\alpha}(w^M) \neq 0} and that \eqn{q_t^{*}} and +#'\eqn{k^{*}(t)} are uniquely determined for all \eqn{t = 1....T} #' #'The numerator in \eqn{\beta_DD} is the average rate of return of the #'instrument over time periods corresponding to the \eqn{(1-\alpha)T} largest -#'drawdowns of the optimal portfolio, where \eqn{w_t - w_k^{\**}(t)} +#'drawdowns of the optimal portfolio, where \eqn{w_t - w_k^{*}(t)} #'is the cumulative rate of return of the instrument from the optimal portfolio -#' peak time \eqn{k^\**(t)} to time t. +#' peak time \eqn{k^{*}(t)} to time t. 
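A self-contained numeric sketch of the posterior computed by BlackLittermanFormula above; the two-asset market, the single relative view, and the confidence level are made-up inputs chosen only for illustration:

Mu    = c( 0.05, 0.07 );                              # prior expected values
Sigma = matrix( c( 0.04, 0.01, 0.01, 0.09 ), 2, 2 );  # prior covariance matrix
P     = matrix( c( -1, 1 ), 1, 2 );                   # pick matrix: asset 2 minus asset 1
v     = 0.03;                                         # view: 3% outperformance
Omega = matrix( 0.02 );                               # confidence in the view

BL = BlackLittermanFormula( Mu, Sigma, P, v, Omega );
BL$BLMu     # posterior means tilted towards the view, approximately (0.048, 0.076)
BL$BLSigma  # posterior covariance, reduced along the view direction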
#' #'The difference in CDaR and standard betas can be explained by the #'conceptual difference in beta definitions: the standard beta accounts for Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R 2013-08-29 10:03:24 UTC (rev 2928) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBetaMulti.R 2013-08-29 10:11:43 UTC (rev 2929) @@ -4,15 +4,15 @@ #'@description #'The drawdown beta is formulated as follows #' -#'\deqn{\beta_DD^i = \frac{{\sum_{s=1}^S}{\sum_{t=1}^T}p_s{q_t^\asterisk}{(w_{s,k^{\asterisk}(s,t)^i}-w_{st}^i)}}{D_{\alpha}(w^M)}} +#'\deqn{\beta_DD^i = \frac{{\sum_{s=1}^S}{\sum_{t=1}^T}p_s{q_t^{*}}{(w_{s,k^{*}(s,t)^i}-w_{st}^i)}}{D_{\alpha}(w^M)}} #' here \eqn{\beta_DD} is the drawdown beta of the instrument for multiple sample path. -#'\eqn{k^{\asterisk}(s,t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_{sk}^p(x^\asterisk)} +#'\eqn{k^{*}(s,t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_{sk}^p(x^{*})} #' #'The numerator in \eqn{\beta_DD} is the average rate of return of the #'instrument over time periods corresponding to the \eqn{(1-\alpha)T} largest -#'drawdowns of the optimal portfolio, where \eqn{w_t - w_k^{\asterisk}(t)} +#'drawdowns of the optimal portfolio, where \eqn{w_t - w_k^{*}(t)} #'is the cumulative rate of return of the instrument from the optimal portfolio -#' peak time \eqn{k^\asterisk(t)} to time t. +#' peak time \eqn{k^{*}(t)} to time t. #' #'The difference in CDaR and standard betas can be explained by the #'conceptual difference in beta definitions: the standard beta accounts for Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R 2013-08-29 10:03:24 UTC (rev 2928) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/PSRopt.R 2013-08-29 10:11:43 UTC (rev 2929) @@ -3,14 +3,14 @@ #'Maximizing for PSR leads to better diversified and more balanced hedge fund allocations compared to the concentrated #'outcomes of Sharpe ratio maximization.We would like to find the vector of weights that maximize the expression #' -#'\deqn{\hat{PSR}(SR^\**) = Z\biggl[\frac{(\hat{SR}-SR^\**)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\** + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\biggr]} +#'\deqn{\hat{PSR}(SR^{*}) = Z\bigg[\frac{(\hat{SR}-SR^{*})\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^{*} + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\bigg]} #' #'where \eqn{\sigma = \sqrt{E[(r-\mu)^2]}} ,its standard deviation.\eqn{\gamma_3=\frac{E\biggl[(r-\mu)^3\biggr]}{\sigma^3}} its skewness, -#'\eqn{\gamma_4=\frac{E\biggl[(r-\mu)^4\biggr]}{\sigma^4}} its kurtosis and \eqn{SR = \frac{\mu}{\sigma}} its Sharpe Ratio. -#'Because \eqn{\hat{PSR}(SR^\**)=Z[\hat{Z^\**}]} is a monotonic increasing function of -#'\eqn{\hat{Z^\**}} ,it suffices to compute the vector that maximizes \eqn{\hat{Z^\**}} +#'\eqn{\gamma_4=\frac{E\bigg[(r-\mu)^4\bigg]}{\sigma^4}} its kurtosis and \eqn{SR = \frac{\mu}{\sigma}} its Sharpe Ratio. +#'Because \eqn{\hat{PSR}(SR^{*})=Z[\hat{Z^{*}}]} is a monotonic increasing function of +#'\eqn{\hat{Z^{*}}} ,it suffices to compute the vector that maximizes \eqn{\hat{Z^{*}}} #' -#'This optimal vector is invariant of the value adopted by the parameter \eqn{SR^\**}. +#'This optimal vector is invariant of the value adopted by the parameter \eqn{SR^{*}}. 
#'Gradient Ascent Logic is used to compute the weights using the Function PsrPortfolio #'@aliases PsrPortfolio #' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R 2013-08-29 10:03:24 UTC (rev 2928) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ProbSharpeRatio.R 2013-08-29 10:11:43 UTC (rev 2929) @@ -8,7 +8,7 @@ #' probability of skill. The reference Sharpe Ratio should be less than #' the Observed Sharpe Ratio. #' -#' \deqn{\hat{PSR}(SR^\ast) = Z\biggl[\frac{(\hat{SR}-SR^\ast)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\ast + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\biggr]} +#' \deqn{\hat{PSR}(SR^{*}) = Z\bigg[\frac{(\hat{SR}-SR^\ast)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\ast + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\bigg]} #' Here \eqn{n} is the track record length or the number of data points. It can be daily,weekly or yearly depending on the input given Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd 2013-08-29 10:03:24 UTC (rev 2928) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/BetaDrawdown.Rd 2013-08-29 10:11:43 UTC (rev 2929) @@ -35,24 +35,24 @@ The drawdown beta is formulated as follows \deqn{\beta_DD = - \frac{{\sum_{t=1}^T}{q_t^\**}{(w_{k^{\**}(t)}-w_t)}}{D_{\alpha}(w^M)}} + \frac{{\sum_{t=1}^T}{q_t^{*}}(w_{k^{*}(t)}-w_t)}{D_{\alpha}(w^M)}} here \eqn{\beta_DD} is the drawdown beta of the instrument. \eqn{k^{\**}(t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_k^M} - \eqn{q_t^\**=1/((1-\alpha)T)} if \eqn{d_t^M} is one of + \eqn{q_t^{*}=1/((1-\alpha)T)} if \eqn{d_t^M} is one of the \eqn{(1-\alpha)T} largest drawdowns \eqn{d_1^{M} - ,......d_t^M} of the optimal portfolio and \eqn{q_t^\** = - 0} otherwise. It is assumed that \eqn{D_\alpha(w^M) - {\neq} 0} and that \eqn{q_t^\**} and \eqn{k^{\**}(t)} are + ,......d_t^M} of the optimal portfolio and \eqn{q_t^{*} = + 0} otherwise. It is assumed that \eqn{D_{\alpha}(w^M) + \neq 0} and that \eqn{q_t^{*}} and \eqn{k^{*}(t)} are uniquely determined for all \eqn{t = 1....T} The numerator in \eqn{\beta_DD} is the average rate of return of the instrument over time periods corresponding to the \eqn{(1-\alpha)T} largest drawdowns of the optimal - portfolio, where \eqn{w_t - w_k^{\**}(t)} is the - cumulative rate of return of the instrument from the - optimal portfolio peak time \eqn{k^\**(t)} to time t. + portfolio, where \eqn{w_t - w_k^{*}(t)} is the cumulative + rate of return of the instrument from the optimal + portfolio peak time \eqn{k^{*}(t)} to time t. 
The difference in CDaR and standard betas can be explained by the conceptual difference in beta Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd 2013-08-29 10:03:24 UTC (rev 2928) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/MultiBetaDrawdown.Rd 2013-08-29 10:11:43 UTC (rev 2929) @@ -40,18 +40,17 @@ The drawdown beta is formulated as follows \deqn{\beta_DD^i = - \frac{{\sum_{s=1}^S}{\sum_{t=1}^T}p_s{q_t^\asterisk}{(w_{s,k^{\asterisk}(s,t)^i}-w_{st}^i)}}{D_{\alpha}(w^M)}} + \frac{{\sum_{s=1}^S}{\sum_{t=1}^T}p_s{q_t^{*}}{(w_{s,k^{*}(s,t)^i}-w_{st}^i)}}{D_{\alpha}(w^M)}} here \eqn{\beta_DD} is the drawdown beta of the instrument for multiple sample path. - \eqn{k^{\asterisk}(s,t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_{sk}^p(x^\asterisk)} + \eqn{k^{*}(s,t)\in{argmax_{t_{\tau}{\le}k{\le}t}}w_{sk}^p(x^{*})} The numerator in \eqn{\beta_DD} is the average rate of return of the instrument over time periods corresponding to the \eqn{(1-\alpha)T} largest drawdowns of the optimal - portfolio, where \eqn{w_t - w_k^{\asterisk}(t)} is the - cumulative rate of return of the instrument from the - optimal portfolio peak time \eqn{k^\asterisk(t)} to time - t. + portfolio, where \eqn{w_t - w_k^{*}(t)} is the cumulative + rate of return of the instrument from the optimal + portfolio peak time \eqn{k^{*}(t)} to time t. The difference in CDaR and standard betas can be explained by the conceptual difference in beta Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd 2013-08-29 10:03:24 UTC (rev 2928) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/ProbSharpeRatio.Rd 2013-08-29 10:11:43 UTC (rev 2929) @@ -47,9 +47,9 @@ of skill. The reference Sharpe Ratio should be less than the Observed Sharpe Ratio. - \deqn{\hat{PSR}(SR^\ast) = - Z\biggl[\frac{(\hat{SR}-SR^\ast)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\ast - + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\biggr]} Here + \deqn{\hat{PSR}(SR^{*}) = + Z\bigg[\frac{(\hat{SR}-SR^\ast)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\ast + + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\bigg]} Here \eqn{n} is the track record length or the number of data points. It can be daily,weekly or yearly depending on the input given \eqn{\hat{\gamma{_3}}} and Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd 2013-08-29 10:03:24 UTC (rev 2928) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/PsrPortfolio.Rd 2013-08-29 10:11:43 UTC (rev 2929) @@ -23,22 +23,22 @@ would like to find the vector of weights that maximize the expression - \deqn{\hat{PSR}(SR^\**) = - Z\biggl[\frac{(\hat{SR}-SR^\**)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\** - + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\biggr]} + \deqn{\hat{PSR}(SR^{*}) = + Z\bigg[\frac{(\hat{SR}-SR^{*})\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^{*} + + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\bigg]} where \eqn{\sigma = \sqrt{E[(r-\mu)^2]}} ,its standard deviation.\eqn{\gamma_3=\frac{E\biggl[(r-\mu)^3\biggr]}{\sigma^3}} its skewness, - \eqn{\gamma_4=\frac{E\biggl[(r-\mu)^4\biggr]}{\sigma^4}} + \eqn{\gamma_4=\frac{E\bigg[(r-\mu)^4\bigg]}{\sigma^4}} its kurtosis and \eqn{SR = \frac{\mu}{\sigma}} its Sharpe - Ratio. 
Because \eqn{\hat{PSR}(SR^\**)=Z[\hat{Z^\**}]} is - a monotonic increasing function of \eqn{\hat{Z^\**}} ,it + Ratio. Because \eqn{\hat{PSR}(SR^{*})=Z[\hat{Z^{*}}]} is + a monotonic increasing function of \eqn{\hat{Z^{*}}} ,it suffices to compute the vector that maximizes - \eqn{\hat{Z^\**}} + \eqn{\hat{Z^{*}}} This optimal vector is invariant of the value adopted by - the parameter \eqn{SR^\**}. Gradient Ascent Logic is used + the parameter \eqn{SR^{*}}. Gradient Ascent Logic is used to compute the weights using the Function PsrPortfolio } \examples{ Modified: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.pdf =================================================================== (Binary files differ) From noreply at r-forge.r-project.org Thu Aug 29 12:47:04 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 12:47:04 +0200 (CEST) Subject: [Returnanalytics-commits] r2930 - in pkg/Meucci: . R demo man Message-ID: <20130829104704.6FC44184DB2@r-forge.r-project.org> Author: xavierv Date: 2013-08-29 12:47:04 +0200 (Thu, 29 Aug 2013) New Revision: 2930 Added: pkg/Meucci/R/BlackLittermanFormula.R pkg/Meucci/R/Log2Lin.R pkg/Meucci/R/PlotCompositionEfficientFrontier.R pkg/Meucci/demo/S_BlackLittermanBasic.R pkg/Meucci/man/BlackLittermanFormula.Rd pkg/Meucci/man/Log2Lin.Rd pkg/Meucci/man/PlotCompositionEfficientFrontier.Rd Modified: pkg/Meucci/DESCRIPTION pkg/Meucci/NAMESPACE pkg/Meucci/R/PlotVolVsCompositionEfficientFrontier.R Log: - added S_BlackLittermanBasic.R demo script from chapter 9 and its associated functions Modified: pkg/Meucci/DESCRIPTION =================================================================== --- pkg/Meucci/DESCRIPTION 2013-08-29 10:11:43 UTC (rev 2929) +++ pkg/Meucci/DESCRIPTION 2013-08-29 10:47:04 UTC (rev 2930) @@ -94,6 +94,9 @@ 'EfficientFrontierReturns.R' 'EfficientFrontierPrices.R' 'FitOrnsteinUhlenbeck.R' + 'PlotVolVsCompositionEfficientFrontier.R' + 'BlackLittermanFormula.R' ' FitOrnsteinUhlenbeck.R' - 'PlotVolVsCompositionEfficientFrontier.R' + 'Log2Lin.R' + 'PlotCompositionEfficientFrontier.R' Modified: pkg/Meucci/NAMESPACE =================================================================== --- pkg/Meucci/NAMESPACE 2013-08-29 10:11:43 UTC (rev 2929) +++ pkg/Meucci/NAMESPACE 2013-08-29 10:47:04 UTC (rev 2930) @@ -1,3 +1,4 @@ +export(BlackLittermanFormula) export(BlackScholesCallPrice) export(Central2Raw) export(CentralAndStandardizedStatistics) @@ -25,6 +26,7 @@ export(integrateSubIntervals) export(InterExtrapolate) export(linreturn) +export(Log2Lin) export(LognormalCopulaPdf) export(LognormalMoments2Parameters) export(LognormalParam2Statistics) @@ -37,6 +39,7 @@ export(PanicCopula) export(PartialConfidencePosterior) export(PerformIidAnalysis) +export(PlotCompositionEfficientFrontier) export(PlotDistributions) export(PlotMarginalsNormalInverseWishart) export(PlotVolVsCompositionEfficientFrontier) Added: pkg/Meucci/R/BlackLittermanFormula.R =================================================================== --- pkg/Meucci/R/BlackLittermanFormula.R (rev 0) +++ pkg/Meucci/R/BlackLittermanFormula.R 2013-08-29 10:47:04 UTC (rev 2930) @@ -0,0 +1,27 @@ +#' This function computes the Black-Litterman formula for the moments of the posterior normal, as described in +#' A. Meucci, "Risk and Asset Allocation", Springer, 2005. +#' +#' @param Mu : [vector] (N x 1) prior expected values. +#' @param Sigma : [matrix] (N x N) prior covariance matrix. +#' @param P : [matrix] (K x N) pick matrix. 
+#' @param v : [vector] (K x 1) vector of views. +#' @param Omega : [matrix] (K x K) matrix of confidence. +#' +#' @return BLMu : [vector] (N x 1) posterior expected values. +#' @return BLSigma : [matrix] (N x N) posterior covariance matrix. +#' +#' @references +#' \url{http://} +#' See Meucci's script for "BlackLittermanFormula.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} +#' @export + +BlackLittermanFormula = function( Mu, Sigma, P, v, Omega) +{ + BLMu = Mu + Sigma %*% t( P ) %*% ( solve( P %*% Sigma %*% t( P ) + Omega ) %*% ( v - P %*% Mu ) ); + BLSigma = Sigma - Sigma %*% t( P ) %*% ( solve( P %*% Sigma %*% t( P ) + Omega ) %*% ( P %*% Sigma ) ); + + return( list( BLMu = BLMu , BLSigma = BLSigma ) ); + +} \ No newline at end of file Added: pkg/Meucci/R/Log2Lin.R =================================================================== --- pkg/Meucci/R/Log2Lin.R (rev 0) +++ pkg/Meucci/R/Log2Lin.R 2013-08-29 10:47:04 UTC (rev 2930) @@ -0,0 +1,23 @@ +#' Map moments of log-returns to linear returns, as described in A. Meucci, +#' "Risk and Asset Allocation", Springer, 2005. +#' +#' @param Mu : [vector] (N x 1) +#' @param Sigma : [matrix] (N x N) +#' +#' @return M : [vector] (N x 1) +#' @return S : [matrix] (N x N) +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "Log2Lin.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} +#' @export + +Log2Lin = function( Mu, Sigma ) +{ + M = exp( Mu + (1/2) * diag( Sigma )) - 1; + S = exp( Mu + (1/2) * diag( Sigma )) %*% t( exp( Mu + ( 1/2 ) * diag(Sigma) ) ) * ( exp( Sigma ) - 1 ); + + return( list( M = M, S = S ) ); +} \ No newline at end of file Added: pkg/Meucci/R/PlotCompositionEfficientFrontier.R =================================================================== --- pkg/Meucci/R/PlotCompositionEfficientFrontier.R (rev 0) +++ pkg/Meucci/R/PlotCompositionEfficientFrontier.R 2013-08-29 10:47:04 UTC (rev 2930) @@ -0,0 +1,30 @@ +#' Plot the efficient frontier, as described in A. Meucci, +#' "Risk and Asset Allocation", Springer, 2005. 
+#'
+#' @param Portfolios : [matrix] (M x N) M portfolios of size N (weights)
+#'
+#' @references
+#' \url{http://symmys.com/node/170}
+#' See Meucci's script for "PlotCompositionEfficientFrontier.m"
+#'
+#' @author Xavier Valls \email{flamejat@@gmail.com}
+#' @export
+
+PlotCompositionEfficientFrontier = function(Portfolios)
+{
+	dev.new();
+
+	xx   = dim( Portfolios )[ 1 ];
+	N    = dim( Portfolios )[ 2 ];
+	Data = t( apply( Portfolios, 1, cumsum ) );
+
+	plot( c(2000, 2000), xlim = c( 1, xx ), ylim = c( 0, max(Data) ), xlab = "Portfolio # risk propensity", ylab = "Portfolio composition" );
+
+	for( n in 1 : N )
+	{
+		x = rbind( 1, matrix(1 : xx), xx );
+		y = rbind( 0, matrix( Data[ , N-n+1 ] ), 0 );
+		polygon( x, y, col = rgb( 0.9 - mod(n,3)*0.2, 0.9 - mod(n,3)*0.2, 0.9 - mod(n,3)*0.2) );
+	}
+
+}
\ No newline at end of file

Modified: pkg/Meucci/R/PlotVolVsCompositionEfficientFrontier.R
===================================================================
--- pkg/Meucci/R/PlotVolVsCompositionEfficientFrontier.R	2013-08-29 10:11:43 UTC (rev 2929)
+++ pkg/Meucci/R/PlotVolVsCompositionEfficientFrontier.R	2013-08-29 10:47:04 UTC (rev 2930)
@@ -31,8 +31,5 @@
 		y = rbind( 0, matrix(Data[ , N-n+1 ]), 0 );
 		polygon( v, y, col = colors[ mod( n, numcolors ) + 1 ] );
 	}
-	#set(gca,'xlim',[v(1) v(end)],'ylim',[0 max(max(Data))])
-	#xlabel('Risk %')
-	#ylabel('Portfolio weights')
 }
\ No newline at end of file

Added: pkg/Meucci/demo/S_BlackLittermanBasic.R
===================================================================
--- pkg/Meucci/demo/S_BlackLittermanBasic.R	                        (rev 0)
+++ pkg/Meucci/demo/S_BlackLittermanBasic.R	2013-08-29 10:47:04 UTC (rev 2930)
@@ -0,0 +1,34 @@
+#' This script describes the basic market-based Black-Litterman approach, in particular:
+#' - full confidence = conditional
+#' - no confidence = reference model
+#' Described in A. Meucci, "Risk and Asset Allocation",
+#' Springer, 2005, Chapter 9.
+#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_BlackLittermanBasic.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} + +################################################################################################################## +### Load inputs +load("../data/covNRets.Rda"); + +################################################################################################################## +### Compute efficient frontier +NumPortf = 40; # number of MV-efficient portfolios +L2L = Log2Lin( covNRets$Mu, covNRets$Sigma ); +EFR = EfficientFrontierReturns( NumPortf, L2L$S, L2L$M ); +PlotCompositionEfficientFrontier( EFR$Composition ); + +################################################################################################################## +### Modify expected returns the Black-Litterman way and compute new efficient frontier +P = cbind( 1, 0, 0, 0, 0, -1 ); # pick matrix +Omega = P %*% covNRets$Sigma %*% t( P ); +Views = sqrt( diag( Omega ) ); # views value + +B = BlackLittermanFormula( covNRets$Mu, covNRets$Sigma, P, Views, Omega ); + +L2LBL = Log2Lin( B$BLMu, B$BLSigma ); +EFRBL = EfficientFrontierReturns( NumPortf, L2LBL$S, L2LBL$M ); +PlotCompositionEfficientFrontier( EFRBL$Composition ); Added: pkg/Meucci/man/BlackLittermanFormula.Rd =================================================================== --- pkg/Meucci/man/BlackLittermanFormula.Rd (rev 0) +++ pkg/Meucci/man/BlackLittermanFormula.Rd 2013-08-29 10:47:04 UTC (rev 2930) @@ -0,0 +1,36 @@ +\name{BlackLittermanFormula} +\alias{BlackLittermanFormula} +\title{This function computes the Black-Litterman formula for the moments of the posterior normal, as described in +A. Meucci, "Risk and Asset Allocation", Springer, 2005.} +\usage{ + BlackLittermanFormula(Mu, Sigma, P, v, Omega) +} +\arguments{ + \item{Mu}{: [vector] (N x 1) prior expected values.} + + \item{Sigma}{: [matrix] (N x N) prior covariance matrix.} + + \item{P}{: [matrix] (K x N) pick matrix.} + + \item{v}{: [vector] (K x 1) vector of views.} + + \item{Omega}{: [matrix] (K x K) matrix of confidence.} +} +\value{ + BLMu : [vector] (N x 1) posterior expected values. + + BLSigma : [matrix] (N x N) posterior covariance matrix. +} +\description{ + This function computes the Black-Litterman formula for + the moments of the posterior normal, as described in A. + Meucci, "Risk and Asset Allocation", Springer, 2005. +} +\author{ + Xavier Valls \email{flamejat at gmail.com} +} +\references{ + \url{http://} See Meucci's script for + "BlackLittermanFormula.m" +} + Added: pkg/Meucci/man/Log2Lin.Rd =================================================================== --- pkg/Meucci/man/Log2Lin.Rd (rev 0) +++ pkg/Meucci/man/Log2Lin.Rd 2013-08-29 10:47:04 UTC (rev 2930) @@ -0,0 +1,30 @@ +\name{Log2Lin} +\alias{Log2Lin} +\title{Map moments of log-returns to linear returns, as described in A. Meucci, +"Risk and Asset Allocation", Springer, 2005.} +\usage{ + Log2Lin(Mu, Sigma) +} +\arguments{ + \item{Mu}{: [vector] (N x 1)} + + \item{Sigma}{: [matrix] (N x N)} +} +\value{ + M : [vector] (N x 1) + + S : [matrix] (N x N) +} +\description{ + Map moments of log-returns to linear returns, as + described in A. Meucci, "Risk and Asset Allocation", + Springer, 2005. 
+} +\author{ + Xavier Valls \email{flamejat at gmail.com} +} +\references{ + \url{http://symmys.com/node/170} See Meucci's script for + "Log2Lin.m" +} + Added: pkg/Meucci/man/PlotCompositionEfficientFrontier.Rd =================================================================== --- pkg/Meucci/man/PlotCompositionEfficientFrontier.Rd (rev 0) +++ pkg/Meucci/man/PlotCompositionEfficientFrontier.Rd 2013-08-29 10:47:04 UTC (rev 2930) @@ -0,0 +1,23 @@ +\name{PlotCompositionEfficientFrontier} +\alias{PlotCompositionEfficientFrontier} +\title{Plot the efficient frontier, as described in A. Meucci, +"Risk and Asset Allocation", Springer, 2005.} +\usage{ + PlotCompositionEfficientFrontier(Portfolios) +} +\arguments{ + \item{Portfolios}{: [matrix] (M x N) M portfolios of size + N (weights)} +} +\description{ + Plot the efficient frontier, as described in A. Meucci, + "Risk and Asset Allocation", Springer, 2005. +} +\author{ + Xavier Valls \email{flamejat at gmail.com} +} +\references{ + \url{http://symmys.com/node/170} See Meucci's script for + "PlotCompositionEfficientFrontier.m" +} + From noreply at r-forge.r-project.org Thu Aug 29 18:49:14 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 18:49:14 +0200 (CEST) Subject: [Returnanalytics-commits] r2931 - in pkg/PortfolioAnalytics: R sandbox Message-ID: <20130829164914.C311B185D32@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-29 18:49:14 +0200 (Thu, 29 Aug 2013) New Revision: 2931 Added: pkg/PortfolioAnalytics/sandbox/testing_risk_budget.R Modified: pkg/PortfolioAnalytics/R/constrained_objective.R Log: Adding penalty for risk_budget_objective when min_concentration=TRUE. Adding testing script for equal CVaR pct_contrib to sandbox - includes examples with and without cleaned returns. 
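As a quick sanity check on the Herfindahl-Hirschman (HHI) idea behind the new min_concentration penalty in the diff below, here is a minimal standalone sketch with illustrative values only (this is not the package code, which applies its own 1/100 scaling to the percentage contributions):

# hypothetical percentage CVaR contributions for a 4-asset portfolio, summing to 1
pct_contrib <- c(0.40, 0.30, 0.20, 0.10)
act_hhi <- sum(pct_contrib^2)     # actual concentration: 0.30
min_hhi <- 1/length(pct_contrib)  # equal-contribution lower bound: 0.25
abs(act_hhi - min_hhi)            # penalty term; zero only at equal risk contribution

For equal contributions rep(0.25, 4), act_hhi equals min_hhi and the penalty vanishes, which is the intended minimum.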
Modified: pkg/PortfolioAnalytics/R/constrained_objective.R
===================================================================
--- pkg/PortfolioAnalytics/R/constrained_objective.R	2013-08-29 10:47:04 UTC (rev 2930)
+++ pkg/PortfolioAnalytics/R/constrained_objective.R	2013-08-29 16:49:14 UTC (rev 2931)
@@ -662,12 +662,20 @@
       # Combined min_con and min_dif to take advantage of a better concentration obj measure
       if(!is.null(objective$min_difference) || !is.null(objective$min_concentration)){
         if(isTRUE(objective$min_difference)){
-          # max_diff<-max(tmp_measure[[2]]-(sum(tmp_measure[[2]])/length(tmp_measure[[2]]))) #second element is the contribution in absolute terms
+          # max_diff<-max(tmp_measure[[2]]-(sum(tmp_measure[[2]])/length(tmp_measure[[2]]))) #second element is the contribution in absolute terms
           # Uses Herfindahl index to calculate concentration; added scaling perc diffs back to univariate numbers
           max_diff <- sqrt(sum(tmp_measure[[3]]^2))/100 #third element is the contribution in percentage terms
           # out = out + penalty * objective$multiplier * max_diff
           out = out + penalty*objective$multiplier * max_diff
         }
+        if(isTRUE(objective$min_concentration)){
+          # use HHI to calculate concentration
+          # actual HHI
+          act_hhi <- sum(tmp_measure[[3]]^2)
+          # minimum possible HHI
+          min_hhi <- sum(rep(1/length(tmp_measure[[3]]), length(tmp_measure[[3]]))^2)/100
+          out <- out + penalty * objective$multiplier * abs(act_hhi - min_hhi)
+        }
       }
     } # end handling of risk_budget objective

Added: pkg/PortfolioAnalytics/sandbox/testing_risk_budget.R
===================================================================
--- pkg/PortfolioAnalytics/sandbox/testing_risk_budget.R	                        (rev 0)
+++ pkg/PortfolioAnalytics/sandbox/testing_risk_budget.R	2013-08-29 16:49:14 UTC (rev 2931)
@@ -0,0 +1,79 @@
+library(PortfolioAnalytics)
+library(DEoptim)
+data(indexes)
+indexes <- indexes[,1:4]
+# Clean the returns first rather than passing in as an argument
+R.clean <- Return.clean(R=indexes[, 1:4], method="boudt")
+
+# Create the portfolio specification object
+init.portf <- portfolio.spec(assets=colnames(indexes[,1:4]))
+# Add box constraints
+init.portf <- add.constraint(portfolio=init.portf, type='box', min = 0, max=1)
+# Add the full investment constraint that specifies the weights must sum to 1.
+init.portf <- add.constraint(portfolio=init.portf, type="full_investment")
+
+# Add objective for min CVaR concentration
+min_conc <- add.objective(portfolio=init.portf, type="risk_budget_objective",
+                          name="CVaR", arguments=list(p=0.95),
+                          min_concentration=TRUE)
+
+# Add objective for min CVaR difference
+min_diff <- add.objective(portfolio=init.portf, type="risk_budget_objective",
+                          name="CVaR", arguments=list(p=0.95),
+                          min_difference=TRUE)
+
+# min concentration
+set.seed(1234)
+opt_min_conc <- optimize.portfolio(R=R.clean, portfolio=min_conc,
+                                   optimize_method="DEoptim", search_size=5000)
+# near equal risk pct_contrib portfolio
+print(opt_min_conc)
+
+# min difference
+set.seed(1234)
+opt_min_diff <- optimize.portfolio(R=indexes, portfolio=min_diff,
+                                   optimize_method="DEoptim", search_size=5000)
+# Not getting an equal risk contrib using min_difference in risk_budget_objective
+# US Bonds have approx. a 2.5% contrib and the rest are 30%-35%
+print(opt_min_diff)
+
+# min difference with cleaned returns
+set.seed(1234)
+opt_min_diff_clean <- optimize.portfolio(R=R.clean, portfolio=min_diff,
+                                         optimize_method="DEoptim", search_size=5000)
+# near equal risk pct_contrib with cleaned returns
+print(opt_min_diff_clean)
+
+# demonstrate error with clean="boudt" in arguments in objective
+# Add objective for min CVaR concentration
+min_conc_clean <- add.objective(portfolio=init.portf, type="risk_budget_objective",
+                                name="CVaR", arguments=list(p=0.95, clean="boudt"),
+                                min_concentration=TRUE)
+
+# If I pass in arguments for dots such as itermax=50, then I get an error with
+# clean.boudt.
+# Error in clean.boudt(na.omit(R[, column, drop = FALSE]), alpha = alpha, :
+#   unused argument(s) (itermax = 50)
+set.seed(1234)
+opt <- optimize.portfolio(R=R.clean, portfolio=min_conc_clean,
+                          optimize_method="DEoptim", search_size=5000,
+                          itermax=50)
+traceback()
+
+# Upon inspecting traceback(), it looks like the error is due to
+# Return.clean(R, method = objective$arguments.clean, ...) where the dots
+# are picking up the dots arguments from optimize.portfolio. Is there a way
+# to correct this? I suppose one way is to clean the returns first and not
+# specify clean in the arguments list in the objective. The dots argument in
+# optimize.portfolio can then be used to control the parameters for the solvers.
+
+# library(iterators)
+# set.seed(1234)
+# out <- optimize.portfolio.rebalancing(R=indexes, portfolio=ObjSpec,
+#                                       optimize_method="DEoptim", search_size=5000,
+#                                       rebalance_on="months",
+#                                       training_period=nrow(indexes)-1)
+#
+# ES(indexes[,1:4], weights=out$weights, p=0.95, clean="boudt",
+#    portfolio_method="component")
+# constrained_objective(w=rep(1/4, 4), R=indexes[,1:4], portfolio=ObjSpec)

From noreply at r-forge.r-project.org  Thu Aug 29 19:30:10 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Thu, 29 Aug 2013 19:30:10 +0200 (CEST)
Subject: [Returnanalytics-commits] r2932 - in pkg/PortfolioAnalytics: R man
Message-ID: <20130829173010.3859B1840BC@r-forge.r-project.org>

Author: rossbennett34
Date: 2013-08-29 19:30:09 +0200 (Thu, 29 Aug 2013)
New Revision: 2932

Modified:
   pkg/PortfolioAnalytics/R/charts.efficient.frontier.R
   pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd
   pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd
Log:
Adding pch.assets parameter for efficient frontier. Change efficient frontier chart for optimize.portfolio objects to use plot for the efficient frontier and points for the assets.
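The plot-then-points layering described in the Log above amounts to a standard base-R pattern; here is a minimal sketch with made-up numbers (none of these objects come from the package):

# hypothetical frontier and asset coordinates
frontier_risk <- c(0.02, 0.03, 0.05); frontier_mean <- c(0.004, 0.006, 0.007)
asset_risk <- c(0.04, 0.06); asset_mean <- c(0.005, 0.006)
# plot() draws the frontier and fixes the axes; points() layers the assets on top
plot(frontier_risk, frontier_mean, type="l", xlab="StdDev", ylab="Mean")
points(asset_risk, asset_mean, pch=21)  # the symbol the new pch.assets argument controls
text(asset_risk, asset_mean, labels=c("A1", "A2"), pos=4, cex=0.8)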
Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-29 16:49:14 UTC (rev 2931) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-29 17:30:09 UTC (rev 2932) @@ -48,6 +48,7 @@ #' @param cex.legend A numerical value giving the amount by which the legend should be magnified relative to the default. #' @param RAR.text Risk Adjusted Return ratio text to plot in the legend #' @param asset.names TRUE/FALSE to include the asset names in the plot +#' @param pch.assets plotting character of the assets, same as in \code{\link{plot}} #' @author Ross Bennett #' @export chart.EfficientFrontier <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ @@ -56,7 +57,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.optimize.portfolio.ROI <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., rf=0, cex.legend=0.8, asset.names=TRUE){ +chart.EfficientFrontier.optimize.portfolio.ROI <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., rf=0, cex.legend=0.8, asset.names=TRUE, pch.assets=21){ if(!inherits(object, "optimize.portfolio.ROI")) stop("object must be of class optimize.portfolio.ROI") portf <- object$portfolio @@ -123,11 +124,13 @@ ylim[2] <- ylim[2] * 1.1 } - # plot a scatter of the assets - plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="Mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) + # plot the efficient frontier line + plot(x=x.f, y=y.f, ylab="Mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) + + # risk-return scatter of the assets + points(x=asset_risk, y=asset_ret, pch=pch.assets) if(asset.names) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) - # plot the efficient line - lines(x=x.f, y=y.f, col="darkgray", lwd=2) + # plot the optimal portfolio points(opt_risk, opt_ret, col="blue", pch=16) # optimal text(x=opt_risk, y=opt_ret, labels="Optimal",col="blue", pos=4, cex=0.8) @@ -147,7 +150,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, cex.legend=0.8, asset.names=TRUE){ +chart.EfficientFrontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, cex.legend=0.8, asset.names=TRUE, pch.assets=21){ # This function will work with objects of class optimize.portfolio.DEoptim, # optimize.portfolio.random, and optimize.portfolio.pso @@ -212,11 +215,13 @@ ylim[2] <- ylim[2] * 1.1 } - # plot a scatter of the assets - plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="Mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) + # plot the efficient frontier line + plot(x=x.f, y=y.f, ylab="Mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) 
+ + # risk-return scatter of the assets + points(x=asset_risk, y=asset_ret, pch=pch.assets) if(asset.names) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) - # plot the efficient line - lines(x=x.f, y=y.f, col="darkgray", lwd=2) + # plot the optimal portfolio points(opt_risk, opt_ret, col="blue", pch=16) # optimal text(x=opt_risk, y=opt_ret, labels="Optimal",col="blue", pos=4, cex=0.8) @@ -375,7 +380,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.efficient.frontier <- function(object, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, cex.legend=0.8, asset.names=TRUE){ +chart.EfficientFrontier.efficient.frontier <- function(object, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, cex.legend=0.8, asset.names=TRUE, pch.assets=21){ if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") # get the returns and efficient frontier object @@ -426,7 +431,7 @@ plot(x=frontier[, mtc], y=frontier[, mean.mtc], ylab="Mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) # risk-return scatter of the assets - points(x=asset_risk, y=asset_ret) + points(x=asset_risk, y=asset_ret, pch=pch.assets) if(asset.names) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) if(!is.null(rf)){ @@ -465,9 +470,10 @@ #' @param ylim set the y-axis limit, same as in \code{\link{plot}} #' @param ... passthrough parameters to \code{\link{plot}} #' @param asset.names TRUE/FALSE to include the asset names in the plot +#' @param pch.assets plotting character of the assets, same as in \code{\link{plot}} #' @author Ross Bennett #' @export -chart.EfficientFrontierOverlay <- function(R, portfolio_list, type, n.portfolios=25, match.col="ES", search_size=2000, main="Efficient Frontiers", cex.axis=0.8, element.color="darkgray", legend.loc=NULL, legend.labels=NULL, cex.legend=0.8, xlim=NULL, ylim=NULL, ..., asset.names=TRUE){ +chart.EfficientFrontierOverlay <- function(R, portfolio_list, type, n.portfolios=25, match.col="ES", search_size=2000, main="Efficient Frontiers", cex.axis=0.8, element.color="darkgray", legend.loc=NULL, legend.labels=NULL, cex.legend=0.8, xlim=NULL, ylim=NULL, ..., asset.names=TRUE, pch.assets=21){ # create multiple efficient frontier objects (one per portfolio in portfolio_list) if(!is.list(portfolio_list)) stop("portfolio_list must be passed in as a list") if(length(portfolio_list) == 1) warning("Only one portfolio object in portfolio_list") @@ -495,7 +501,7 @@ } # plot the assets - plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="Mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) + plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="Mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, pch=pch.assets, ...) 
axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) Modified: pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-29 16:49:14 UTC (rev 2931) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-29 17:30:09 UTC (rev 2932) @@ -15,21 +15,23 @@ ylim = NULL, cex.axis = 0.8, element.color = "darkgray", main = "Efficient Frontier", ..., rf = 0, - cex.legend = 0.8, asset.names = TRUE) + cex.legend = 0.8, asset.names = TRUE, pch.assets = 21) chart.EfficientFrontier.optimize.portfolio(object, match.col = "ES", n.portfolios = 25, xlim = NULL, ylim = NULL, cex.axis = 0.8, element.color = "darkgray", main = "Efficient Frontier", ..., RAR.text = "SR", - rf = 0, cex.legend = 0.8, asset.names = TRUE) + rf = 0, cex.legend = 0.8, asset.names = TRUE, + pch.assets = 21) chart.EfficientFrontier.efficient.frontier(object, match.col = "ES", n.portfolios = NULL, xlim = NULL, ylim = NULL, cex.axis = 0.8, element.color = "darkgray", main = "Efficient Frontier", ..., RAR.text = "SR", - rf = 0, cex.legend = 0.8, asset.names = TRUE) + rf = 0, cex.legend = 0.8, asset.names = TRUE, + pch.assets = 21) } \arguments{ \item{object}{optimal portfolio created by @@ -75,6 +77,9 @@ \item{asset.names}{TRUE/FALSE to include the asset names in the plot} + + \item{pch.assets}{plotting character of the assets, same + as in \code{\link{plot}}} } \description{ This function charts the efficient frontier and Modified: pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd 2013-08-29 16:49:14 UTC (rev 2931) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd 2013-08-29 17:30:09 UTC (rev 2932) @@ -8,7 +8,7 @@ cex.axis = 0.8, element.color = "darkgray", legend.loc = NULL, legend.labels = NULL, cex.legend = 0.8, xlim = NULL, ylim = NULL, ..., - asset.names = TRUE) + asset.names = TRUE, pch.assets = 21) } \arguments{ \item{R}{an xts object of asset returns} @@ -61,6 +61,9 @@ \item{asset.names}{TRUE/FALSE to include the asset names in the plot} + + \item{pch.assets}{plotting character of the assets, same + as in \code{\link{plot}}} } \description{ Overlay the efficient frontiers of multiple portfolio From noreply at r-forge.r-project.org Thu Aug 29 21:33:20 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 21:33:20 +0200 (CEST) Subject: [Returnanalytics-commits] r2933 - in pkg/PortfolioAnalytics: R man Message-ID: <20130829193320.C4ECB185D4D@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-29 21:33:20 +0200 (Thu, 29 Aug 2013) New Revision: 2933 Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd Log: Adding options for efficient frontier charts: 1) tangent line 2) chart assets. Changed names.asset to labels.assets for naming consistency with other arguments. The lower values for xlim and ylim now default to 0. 
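A usage sketch exercising the new arguments (meanvar.ef is assumed to be an efficient.frontier object, as in the package's sandbox scripts; the argument names are taken from the diff below):

chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l",
                        tangent.line=FALSE,   # keep the tangency point, drop the line
                        chart.assets=TRUE,    # include the asset scatter
                        labels.assets=FALSE,  # ...but omit the asset name labels
                        pch.assets=18)        # plotting character for the assets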
Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-29 17:30:09 UTC (rev 2932) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-29 19:33:20 UTC (rev 2933) @@ -45,9 +45,12 @@ #' @param main a main title for the plot #' @param ... passthrough parameters to \code{\link{plot}} #' @param rf risk free rate. If \code{rf} is not null, the maximum Sharpe Ratio or modified Sharpe Ratio tangency portfolio will be plotted +#' @param tangent.line TRUE/FALSE to plot the tangent line #' @param cex.legend A numerical value giving the amount by which the legend should be magnified relative to the default. #' @param RAR.text Risk Adjusted Return ratio text to plot in the legend -#' @param asset.names TRUE/FALSE to include the asset names in the plot +#' @param chart.assets TRUE/FALSE to include the assets +#' @param labels.assets TRUE/FALSE to include the asset names in the plot. +#' \code{chart.assets} must be \code{TRUE} to plot asset names #' @param pch.assets plotting character of the assets, same as in \code{\link{plot}} #' @author Ross Bennett #' @export @@ -57,7 +60,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.optimize.portfolio.ROI <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., rf=0, cex.legend=0.8, asset.names=TRUE, pch.assets=21){ +chart.EfficientFrontier.optimize.portfolio.ROI <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., rf=0, tangent.line=TRUE, cex.legend=0.8, chart.assets=TRUE, labels.assets=TRUE, pch.assets=21){ if(!inherits(object, "optimize.portfolio.ROI")) stop("object must be of class optimize.portfolio.ROI") portf <- object$portfolio @@ -115,30 +118,35 @@ # set the x and y limits if(is.null(xlim)){ xlim <- range(c(x.f, asset_risk)) - xlim[1] <- xlim[1] * 0.8 + # xlim[1] <- xlim[1] * 0.8 + xlim[1] <- 0 xlim[2] <- xlim[2] * 1.15 } if(is.null(ylim)){ ylim <- range(c(y.f, asset_ret)) - ylim[1] <- ylim[1] * 0.9 + # ylim[1] <- ylim[1] * 0.9 + ylim[1] <- 0 ylim[2] <- ylim[2] * 1.1 } # plot the efficient frontier line plot(x=x.f, y=y.f, ylab="Mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) 
- # risk-return scatter of the assets - points(x=asset_risk, y=asset_ret, pch=pch.assets) - if(asset.names) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + if(chart.assets){ + # risk-return scatter of the assets + points(x=asset_risk, y=asset_ret, pch=pch.assets) + if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + } # plot the optimal portfolio points(opt_risk, opt_ret, col="blue", pch=16) # optimal text(x=opt_risk, y=opt_ret, labels="Optimal",col="blue", pos=4, cex=0.8) if(!is.null(rf)){ # Plot tangency line and points at risk-free rate and tangency portfolio - abline(rf, srmax, lty=2) + if(tangent.line) abline(rf, srmax, lty=2) points(0, rf, pch=16) points(x.f[idx.maxsr], y.f[idx.maxsr], pch=16) + text(x=x.f[idx.maxsr], y=y.f[idx.maxsr], labels="T", pos=4, cex=0.8) # Add lengend with max Sharpe Ratio and risk-free rate legend("topleft", paste("Max ", rar, " = ", signif(srmax,3), sep = ""), bty = "n", cex=cex.legend) legend("topleft", inset = c(0,0.05), paste("rf = ", signif(rf,3), sep = ""), bty = "n", cex=cex.legend) @@ -150,7 +158,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, cex.legend=0.8, asset.names=TRUE, pch.assets=21){ +chart.EfficientFrontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, tangent.line=TRUE, cex.legend=0.8, chart.assets=TRUE, labels.assets=TRUE, pch.assets=21){ # This function will work with objects of class optimize.portfolio.DEoptim, # optimize.portfolio.random, and optimize.portfolio.pso @@ -206,30 +214,35 @@ # set the x and y limits if(is.null(xlim)){ xlim <- range(c(x.f, asset_risk)) - xlim[1] <- xlim[1] * 0.8 + # xlim[1] <- xlim[1] * 0.8 + xlim[1] <- 0 xlim[2] <- xlim[2] * 1.15 } if(is.null(ylim)){ ylim <- range(c(y.f, asset_ret)) - ylim[1] <- ylim[1] * 0.9 + # ylim[1] <- ylim[1] * 0.9 + ylim[1] <- 0 ylim[2] <- ylim[2] * 1.1 } # plot the efficient frontier line plot(x=x.f, y=y.f, ylab="Mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) 
- # risk-return scatter of the assets - points(x=asset_risk, y=asset_ret, pch=pch.assets) - if(asset.names) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) - + if(chart.assets){ + # risk-return scatter of the assets + points(x=asset_risk, y=asset_ret, pch=pch.assets) + if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + } + # plot the optimal portfolio points(opt_risk, opt_ret, col="blue", pch=16) # optimal text(x=opt_risk, y=opt_ret, labels="Optimal",col="blue", pos=4, cex=0.8) if(!is.null(rf)){ # Plot tangency line and points at risk-free rate and tangency portfolio - abline(rf, srmax, lty=2) + if(tangent.line) abline(rf, srmax, lty=2) points(0, rf, pch=16) points(x.f[idx.maxsr], y.f[idx.maxsr], pch=16) + text(x=x.f[idx.maxsr], y=y.f[idx.maxsr], labels="T", pos=4, cex=0.8) # Add lengend with max Sharpe Ratio and risk-free rate legend("topleft", paste("Max ", RAR.text, " = ", signif(srmax,3), sep = ""), bty = "n", cex=cex.legend) legend("topleft", inset = c(0,0.05), paste("rf = ", signif(rf,3), sep = ""), bty = "n", cex=cex.legend) @@ -265,7 +278,7 @@ #' @rdname chart.Weights.EF #' @export -chart.Weights.EF.efficient.frontier <- function(object, colorset=NULL, ..., n.portfolios=25, match.col="ES", main="EF Weights", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray", legend.loc="topright"){ +chart.Weights.EF.efficient.frontier <- function(object, colorset=NULL, ..., n.portfolios=25, match.col="ES", main="", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray", legend.loc="topright"){ # using ideas from weightsPlot.R in fPortfolio package if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") @@ -356,7 +369,7 @@ # axis labels and titles mtext(match.col, side = 3, line = 2, adj = 0.5, cex = cex.lab) mtext("Mean", side = 1, line = 2, adj = 0.5, cex = cex.lab) - mtext("Weight", side = 2, line = 2, adj = 1, cex = cex.lab) + mtext("Weight", side = 2, line = 2, adj = 0.5, cex = cex.lab) # add title title(main=main, line=3) # mtext(main, adj = 0, line = 2.5, font = 2, cex = 0.8) @@ -365,7 +378,7 @@ #' @rdname chart.Weights.EF #' @export -chart.Weights.EF.optimize.portfolio <- function(object, colorset=NULL, ..., n.portfolios=25, match.col="ES", main="EF Weights", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray", legend.loc="topright"){ +chart.Weights.EF.optimize.portfolio <- function(object, colorset=NULL, ..., n.portfolios=25, match.col="ES", main="", cex.lab=0.8, cex.axis=0.8, cex.legend=0.8, legend.labels=NULL, element.color="darkgray", legend.loc="topright"){ # chart the weights along the efficient frontier of an objected created by optimize.portfolio if(!inherits(object, "optimize.portfolio")) stop("object must be of class optimize.portfolio") @@ -380,7 +393,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.efficient.frontier <- function(object, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, cex.legend=0.8, asset.names=TRUE, pch.assets=21){ +chart.EfficientFrontier.efficient.frontier <- function(object, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, tangent.line=TRUE, cex.legend=0.8, chart.assets=TRUE, labels.assets=TRUE, pch.assets=21){ if(!inherits(object, 
"efficient.frontier")) stop("object must be of class 'efficient.frontier'") # get the returns and efficient frontier object @@ -412,12 +425,14 @@ # set the x and y limits if(is.null(xlim)){ xlim <- range(c(frontier[, mtc], asset_risk)) - xlim[1] <- xlim[1] * 0.8 + # xlim[1] <- xlim[1] * 0.8 + xlim[1] <- 0 xlim[2] <- xlim[2] * 1.15 } if(is.null(ylim)){ ylim <- range(c(frontier[, mean.mtc], asset_ret)) - ylim[1] <- ylim[1] * 0.9 + # ylim[1] <- ylim[1] * 0.9 + ylim[1] <- 0 ylim[2] <- ylim[2] * 1.1 } @@ -430,15 +445,18 @@ # plot the efficient frontier line plot(x=frontier[, mtc], y=frontier[, mean.mtc], ylab="Mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) - # risk-return scatter of the assets - points(x=asset_risk, y=asset_ret, pch=pch.assets) - if(asset.names) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + if(chart.assets){ + # risk-return scatter of the assets + points(x=asset_risk, y=asset_ret, pch=pch.assets) + if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + } if(!is.null(rf)){ # Plot tangency line and points at risk-free rate and tangency portfolio - abline(rf, srmax, lty=2) + if(tangent.line) abline(rf, srmax, lty=2) points(0, rf, pch=16) points(frontier[idx.maxsr, mtc], frontier[idx.maxsr, mean.mtc], pch=16) + text(x=frontier[idx.maxsr], y=frontier[idx.maxsr], labels="T", pos=4, cex=0.8) # Add legend with max Risk adjusted Return ratio and risk-free rate legend("topleft", paste("Max ", RAR.text, " = ", signif(srmax,3), sep = ""), bty = "n", cex=cex.legend) legend("topleft", inset = c(0,0.05), paste("rf = ", signif(rf,3), sep = ""), bty = "n", cex=cex.legend) @@ -469,11 +487,12 @@ #' @param xlim set the x-axis limit, same as in \code{\link{plot}} #' @param ylim set the y-axis limit, same as in \code{\link{plot}} #' @param ... 
passthrough parameters to \code{\link{plot}} -#' @param asset.names TRUE/FALSE to include the asset names in the plot +#' @param chart.assets TRUE/FALSE to include the assets +#' @param labels.assets TRUE/FALSE to include the asset names in the plot #' @param pch.assets plotting character of the assets, same as in \code{\link{plot}} #' @author Ross Bennett #' @export -chart.EfficientFrontierOverlay <- function(R, portfolio_list, type, n.portfolios=25, match.col="ES", search_size=2000, main="Efficient Frontiers", cex.axis=0.8, element.color="darkgray", legend.loc=NULL, legend.labels=NULL, cex.legend=0.8, xlim=NULL, ylim=NULL, ..., asset.names=TRUE, pch.assets=21){ +chart.EfficientFrontierOverlay <- function(R, portfolio_list, type, n.portfolios=25, match.col="ES", search_size=2000, main="Efficient Frontiers", cex.axis=0.8, element.color="darkgray", legend.loc=NULL, legend.labels=NULL, cex.legend=0.8, xlim=NULL, ylim=NULL, ..., chart.assets=TRUE, labels.assets=TRUE, pch.assets=21){ # create multiple efficient frontier objects (one per portfolio in portfolio_list) if(!is.list(portfolio_list)) stop("portfolio_list must be passed in as a list") if(length(portfolio_list) == 1) warning("Only one portfolio object in portfolio_list") @@ -491,24 +510,29 @@ # set the x and y limits if(is.null(xlim)){ xlim <- range(asset_risk) - xlim[1] <- xlim[1] * 0.8 + # xlim[1] <- xlim[1] * 0.8 + xlim[1] <- 0 xlim[2] <- xlim[2] * 1.15 } if(is.null(ylim)){ ylim <- range(asset_ret) - ylim[1] <- ylim[1] * 0.9 + # ylim[1] <- ylim[1] * 0.9 + ylim[1] <- 0 ylim[2] <- ylim[2] * 1.1 } # plot the assets - plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="Mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, pch=pch.assets, ...) + plot(x=asset_risk, y=asset_ret, xlab=match.col, ylab="Mean", main=main, xlim=xlim, ylim=ylim, axes=FALSE, type="n", ...) 
axis(1, cex.axis = cex.axis, col = element.color) axis(2, cex.axis = cex.axis, col = element.color) box(col = element.color) - # risk-return scatter of the assets - points(x=asset_risk, y=asset_ret) - if(asset.names) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + if(chart.assets){ + # risk-return scatter of the assets + points(x=asset_risk, y=asset_ret, pch=pch.assets) + if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + } + for(i in 1:length(out)){ tmp <- out[[i]] tmpfrontier <- tmp$frontier @@ -527,6 +551,7 @@ mtc <- pmatch(paste(match.col, match.col, sep='.'),cnames) } if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") + # Add the efficient frontier lines to the plot lines(x=tmpfrontier[, mtc], y=tmpfrontier[, mean.mtc], col=i, lty=i, lwd=2) } if(!is.null(legend.loc)){ Modified: pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-29 17:30:09 UTC (rev 2932) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-29 19:33:20 UTC (rev 2933) @@ -15,14 +15,17 @@ ylim = NULL, cex.axis = 0.8, element.color = "darkgray", main = "Efficient Frontier", ..., rf = 0, - cex.legend = 0.8, asset.names = TRUE, pch.assets = 21) + tangent.line = TRUE, cex.legend = 0.8, + chart.assets = TRUE, labels.assets = TRUE, + pch.assets = 21) chart.EfficientFrontier.optimize.portfolio(object, match.col = "ES", n.portfolios = 25, xlim = NULL, ylim = NULL, cex.axis = 0.8, element.color = "darkgray", main = "Efficient Frontier", ..., RAR.text = "SR", - rf = 0, cex.legend = 0.8, asset.names = TRUE, + rf = 0, tangent.line = TRUE, cex.legend = 0.8, + chart.assets = TRUE, labels.assets = TRUE, pch.assets = 21) chart.EfficientFrontier.efficient.frontier(object, @@ -30,7 +33,8 @@ ylim = NULL, cex.axis = 0.8, element.color = "darkgray", main = "Efficient Frontier", ..., RAR.text = "SR", - rf = 0, cex.legend = 0.8, asset.names = TRUE, + rf = 0, tangent.line = TRUE, cex.legend = 0.8, + chart.assets = TRUE, labels.assets = TRUE, pch.assets = 21) } \arguments{ @@ -68,6 +72,8 @@ maximum Sharpe Ratio or modified Sharpe Ratio tangency portfolio will be plotted} + \item{tangent.line}{TRUE/FALSE to plot the tangent line} + \item{cex.legend}{A numerical value giving the amount by which the legend should be magnified relative to the default.} @@ -75,9 +81,12 @@ \item{RAR.text}{Risk Adjusted Return ratio text to plot in the legend} - \item{asset.names}{TRUE/FALSE to include the asset names - in the plot} + \item{chart.assets}{TRUE/FALSE to include the assets} + \item{labels.assets}{TRUE/FALSE to include the asset + names in the plot. 
\code{chart.assets} must be + \code{TRUE} to plot asset names} + \item{pch.assets}{plotting character of the assets, same as in \code{\link{plot}}} } Modified: pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd 2013-08-29 17:30:09 UTC (rev 2932) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd 2013-08-29 19:33:20 UTC (rev 2933) @@ -8,7 +8,8 @@ cex.axis = 0.8, element.color = "darkgray", legend.loc = NULL, legend.labels = NULL, cex.legend = 0.8, xlim = NULL, ylim = NULL, ..., - asset.names = TRUE, pch.assets = 21) + chart.assets = TRUE, labels.assets = TRUE, + pch.assets = 21) } \arguments{ \item{R}{an xts object of asset returns} @@ -59,9 +60,11 @@ \item{...}{passthrough parameters to \code{\link{plot}}} - \item{asset.names}{TRUE/FALSE to include the asset names - in the plot} + \item{chart.assets}{TRUE/FALSE to include the assets} + \item{labels.assets}{TRUE/FALSE to include the asset + names in the plot} + \item{pch.assets}{plotting character of the assets, same as in \code{\link{plot}}} } Modified: pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd 2013-08-29 17:30:09 UTC (rev 2932) +++ pkg/PortfolioAnalytics/man/chart.Weights.EF.Rd 2013-08-29 19:33:20 UTC (rev 2933) @@ -12,13 +12,13 @@ chart.Weights.EF.efficient.frontier(object, colorset = NULL, ..., n.portfolios = 25, - match.col = "ES", main = "EF Weights", cex.lab = 0.8, + match.col = "ES", main = "", cex.lab = 0.8, cex.axis = 0.8, cex.legend = 0.8, legend.labels = NULL, element.color = "darkgray", legend.loc = "topright") chart.Weights.EF.optimize.portfolio(object, colorset = NULL, ..., n.portfolios = 25, - match.col = "ES", main = "EF Weights", cex.lab = 0.8, + match.col = "ES", main = "", cex.lab = 0.8, cex.axis = 0.8, cex.legend = 0.8, legend.labels = NULL, element.color = "darkgray", legend.loc = "topright") } From noreply at r-forge.r-project.org Thu Aug 29 21:36:38 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Thu, 29 Aug 2013 21:36:38 +0200 (CEST) Subject: [Returnanalytics-commits] r2934 - pkg/PortfolioAnalytics/sandbox Message-ID: <20130829193638.D658D185156@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-29 21:36:38 +0200 (Thu, 29 Aug 2013) New Revision: 2934 Modified: pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R Log: adding examples to efficient frontier testing. 
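(Context for the diff below: the sandbox script charts meanvar.ef, an efficient frontier object built earlier in the file. Under the package's conventions that object would come from a call roughly like the following; the exact signature is assumed here rather than shown in this commit.)

meanvar.ef <- create.EfficientFrontier(R=R, portfolio=meanvar.portf, type="mean-StdDev")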
Modified: pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R
===================================================================
--- pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R	2013-08-29 19:33:20 UTC (rev 2933)
+++ pkg/PortfolioAnalytics/sandbox/testing_efficient_frontier.R	2013-08-29 19:36:38 UTC (rev 2934)
@@ -40,14 +40,34 @@
 print(meanvar.ef)
 summary(meanvar.ef, digits=2)
 meanvar.ef$frontier
+
 # The RAR.text argument can be used for the risk-adjusted-return name on the legend,
 # by default it is 'Modified Sharpe Ratio'
 chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l", RAR.text="Sharpe Ratio", pch=4)
+
 # The tangency portfolio and line are plotted by default, these can be omitted
 # by setting rf=NULL
-chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l", rf=NULL)
+chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="b", rf=NULL)
+
+# The tangent line can be omitted with tangent.line=FALSE. The tangent portfolio,
+# risk-free rate and Sharpe Ratio are still included in the plot
+chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l", tangent.line=FALSE)
+
+# The assets can be omitted with chart.assets=FALSE
+chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l",
+                        tangent.line=FALSE, chart.assets=FALSE)
+
+# Just the names of the assets can be omitted with labels.assets=FALSE and the
+# plotting character can be changed with pch.assets
+chart.EfficientFrontier(meanvar.ef, match.col="StdDev", type="l",
+                        tangent.line=FALSE, labels.assets=FALSE, pch.assets=1)
+
 chart.Weights.EF(meanvar.ef, colorset=bluemono, match.col="StdDev")
+# The labels for Mean, Weight, and StdDev can be increased or decreased with
+# the cex.lab argument. The default is cex.lab=0.8
+chart.Weights.EF(meanvar.ef, colorset=bluemono, match.col="StdDev", main="", cex.lab=1)
+
 # If you have a lot of assets and they don't fit with the default legend, you
 # can set legend.loc=NULL and customize the plot.
 par(mar=c(8, 4, 4, 2)+0.1, xpd=TRUE)
@@ -60,14 +80,14 @@
 # The efficient frontier is created from the 'opt_meanvar' object by getting
 # the portfolio and returns objects and then passing those to create.EfficientFrontier
-chart.EfficientFrontier(opt_meanvar, match.col="StdDev", n.portfolios=25)
+chart.EfficientFrontier(opt_meanvar, match.col="StdDev", n.portfolios=25, type="l")

 # Rerun the optimization with a new risk aversion parameter to change where the
 # portfolio is along the efficient frontier. The 'optimal' portfolio plotted on
 # the efficient frontier is the optimal portfolio returned by optimize.portfolio.
meanvar.portf$objectives[[2]]$risk_aversion=0.25 opt_meanvar <- optimize.portfolio(R=R, portfolio=meanvar.portf, optimize_method="ROI", trace=TRUE) -chart.EfficientFrontier(opt_meanvar, match.col="StdDev", n.portfolios=25) +chart.EfficientFrontier(opt_meanvar, match.col="StdDev", n.portfolios=25, type="l") # The weights along the efficient frontier can be plotted by passing in the # optimize.portfolio output object @@ -125,5 +145,6 @@ legend.labels <- c("Long Only", "Box", "Group + Long Only") chart.EfficientFrontierOverlay(R=R, portfolio_list=portf.list, type="mean-StdDev", match.col="StdDev", legend.loc="topleft", - legend.labels=legend.labels, cex.legend=0.6) + legend.labels=legend.labels, cex.legend=0.6, + labels.assets=FALSE, pch.assets=18) From noreply at r-forge.r-project.org Fri Aug 30 02:25:15 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 30 Aug 2013 02:25:15 +0200 (CEST) Subject: [Returnanalytics-commits] r2935 - in pkg/PortfolioAnalytics: R man Message-ID: <20130830002515.AFB95184543@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-30 02:25:15 +0200 (Fri, 30 Aug 2013) New Revision: 2935 Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd Log: Modifying a few of the graphical parameters for efficient frontier charts. Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-29 19:36:38 UTC (rev 2934) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-30 00:25:15 UTC (rev 2935) @@ -52,6 +52,7 @@ #' @param labels.assets TRUE/FALSE to include the asset names in the plot. #' \code{chart.assets} must be \code{TRUE} to plot asset names #' @param pch.assets plotting character of the assets, same as in \code{\link{plot}} +#' @param cex.assets A numerical value giving the amount by which the asset points and labels should be magnified relative to the default. #' @author Ross Bennett #' @export chart.EfficientFrontier <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ...){ @@ -60,7 +61,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.optimize.portfolio.ROI <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., rf=0, tangent.line=TRUE, cex.legend=0.8, chart.assets=TRUE, labels.assets=TRUE, pch.assets=21){ +chart.EfficientFrontier.optimize.portfolio.ROI <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., rf=0, tangent.line=TRUE, cex.legend=0.8, chart.assets=TRUE, labels.assets=TRUE, pch.assets=21, cex.assets=0.8){ if(!inherits(object, "optimize.portfolio.ROI")) stop("object must be of class optimize.portfolio.ROI") portf <- object$portfolio @@ -132,10 +133,13 @@ # plot the efficient frontier line plot(x=x.f, y=y.f, ylab="Mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) 
+ # Add the global minimum variance or global minimum ETL portfolio + points(x=x.f[1], y=y.f[1], pch=16) + if(chart.assets){ # risk-return scatter of the assets - points(x=asset_risk, y=asset_ret, pch=pch.assets) - if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + points(x=asset_risk, y=asset_ret, pch=pch.assets, cex=cex.assets) + if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=cex.assets) } # plot the optimal portfolio @@ -146,9 +150,9 @@ if(tangent.line) abline(rf, srmax, lty=2) points(0, rf, pch=16) points(x.f[idx.maxsr], y.f[idx.maxsr], pch=16) - text(x=x.f[idx.maxsr], y=y.f[idx.maxsr], labels="T", pos=4, cex=0.8) + # text(x=x.f[idx.maxsr], y=y.f[idx.maxsr], labels="T", pos=4, cex=0.8) # Add lengend with max Sharpe Ratio and risk-free rate - legend("topleft", paste("Max ", rar, " = ", signif(srmax,3), sep = ""), bty = "n", cex=cex.legend) + legend("topleft", paste(rar, " = ", signif(srmax,3), sep = ""), bty = "n", cex=cex.legend) legend("topleft", inset = c(0,0.05), paste("rf = ", signif(rf,3), sep = ""), bty = "n", cex=cex.legend) } axis(1, cex.axis = cex.axis, col = element.color) @@ -158,7 +162,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, tangent.line=TRUE, cex.legend=0.8, chart.assets=TRUE, labels.assets=TRUE, pch.assets=21){ +chart.EfficientFrontier.optimize.portfolio <- function(object, match.col="ES", n.portfolios=25, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, tangent.line=TRUE, cex.legend=0.8, chart.assets=TRUE, labels.assets=TRUE, pch.assets=21, cex.assets=0.8){ # This function will work with objects of class optimize.portfolio.DEoptim, # optimize.portfolio.random, and optimize.portfolio.pso @@ -228,10 +232,13 @@ # plot the efficient frontier line plot(x=x.f, y=y.f, ylab="Mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) 
+ # Add the global minimum variance or global minimum ETL portfolio + points(x=x.f[1], y=y.f[1], pch=16) + if(chart.assets){ # risk-return scatter of the assets - points(x=asset_risk, y=asset_ret, pch=pch.assets) - if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + points(x=asset_risk, y=asset_ret, pch=pch.assets, cex=cex.assets) + if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=cex.assets) } # plot the optimal portfolio @@ -242,9 +249,9 @@ if(tangent.line) abline(rf, srmax, lty=2) points(0, rf, pch=16) points(x.f[idx.maxsr], y.f[idx.maxsr], pch=16) - text(x=x.f[idx.maxsr], y=y.f[idx.maxsr], labels="T", pos=4, cex=0.8) + # text(x=x.f[idx.maxsr], y=y.f[idx.maxsr], labels="T", pos=4, cex=0.8) # Add lengend with max Sharpe Ratio and risk-free rate - legend("topleft", paste("Max ", RAR.text, " = ", signif(srmax,3), sep = ""), bty = "n", cex=cex.legend) + legend("topleft", paste(RAR.text, " = ", signif(srmax,3), sep = ""), bty = "n", cex=cex.legend) legend("topleft", inset = c(0,0.05), paste("rf = ", signif(rf,3), sep = ""), bty = "n", cex=cex.legend) } axis(1, cex.axis = cex.axis, col = element.color) @@ -393,7 +400,7 @@ #' @rdname chart.EfficientFrontier #' @export -chart.EfficientFrontier.efficient.frontier <- function(object, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, tangent.line=TRUE, cex.legend=0.8, chart.assets=TRUE, labels.assets=TRUE, pch.assets=21){ +chart.EfficientFrontier.efficient.frontier <- function(object, match.col="ES", n.portfolios=NULL, xlim=NULL, ylim=NULL, cex.axis=0.8, element.color="darkgray", main="Efficient Frontier", ..., RAR.text="SR", rf=0, tangent.line=TRUE, cex.legend=0.8, chart.assets=TRUE, labels.assets=TRUE, pch.assets=21, cex.assets=0.8){ if(!inherits(object, "efficient.frontier")) stop("object must be of class 'efficient.frontier'") # get the returns and efficient frontier object @@ -445,10 +452,13 @@ # plot the efficient frontier line plot(x=frontier[, mtc], y=frontier[, mean.mtc], ylab="Mean", xlab=match.col, main=main, xlim=xlim, ylim=ylim, axes=FALSE, ...) 
+ # Add the global minimum variance or global minimum ETL portfolio + points(x=frontier[1, mtc], y=frontier[1, mean.mtc], pch=16) + if(chart.assets){ # risk-return scatter of the assets - points(x=asset_risk, y=asset_ret, pch=pch.assets) - if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + points(x=asset_risk, y=asset_ret, pch=pch.assets, cex=cex.assets) + if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=cex.assets) } if(!is.null(rf)){ @@ -456,9 +466,9 @@ if(tangent.line) abline(rf, srmax, lty=2) points(0, rf, pch=16) points(frontier[idx.maxsr, mtc], frontier[idx.maxsr, mean.mtc], pch=16) - text(x=frontier[idx.maxsr], y=frontier[idx.maxsr], labels="T", pos=4, cex=0.8) + # text(x=frontier[idx.maxsr], y=frontier[idx.maxsr], labels="T", pos=4, cex=0.8) # Add legend with max Risk adjusted Return ratio and risk-free rate - legend("topleft", paste("Max ", RAR.text, " = ", signif(srmax,3), sep = ""), bty = "n", cex=cex.legend) + legend("topleft", paste(RAR.text, " = ", signif(srmax,3), sep = ""), bty = "n", cex=cex.legend) legend("topleft", inset = c(0,0.05), paste("rf = ", signif(rf,3), sep = ""), bty = "n", cex=cex.legend) } axis(1, cex.axis = cex.axis, col = element.color) @@ -529,8 +539,8 @@ if(chart.assets){ # risk-return scatter of the assets - points(x=asset_risk, y=asset_ret, pch=pch.assets) - if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=0.8) + points(x=asset_risk, y=asset_ret, pch=pch.assets, cex=cex.assets) + if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=cex.assets) } for(i in 1:length(out)){ Modified: pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-29 19:36:38 UTC (rev 2934) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontier.Rd 2013-08-30 00:25:15 UTC (rev 2935) @@ -17,7 +17,7 @@ main = "Efficient Frontier", ..., rf = 0, tangent.line = TRUE, cex.legend = 0.8, chart.assets = TRUE, labels.assets = TRUE, - pch.assets = 21) + pch.assets = 21, cex.assets = 0.8) chart.EfficientFrontier.optimize.portfolio(object, match.col = "ES", n.portfolios = 25, xlim = NULL, @@ -26,7 +26,7 @@ main = "Efficient Frontier", ..., RAR.text = "SR", rf = 0, tangent.line = TRUE, cex.legend = 0.8, chart.assets = TRUE, labels.assets = TRUE, - pch.assets = 21) + pch.assets = 21, cex.assets = 0.8) chart.EfficientFrontier.efficient.frontier(object, match.col = "ES", n.portfolios = NULL, xlim = NULL, @@ -35,7 +35,7 @@ main = "Efficient Frontier", ..., RAR.text = "SR", rf = 0, tangent.line = TRUE, cex.legend = 0.8, chart.assets = TRUE, labels.assets = TRUE, - pch.assets = 21) + pch.assets = 21, cex.assets = 0.8) } \arguments{ \item{object}{optimal portfolio created by @@ -89,6 +89,10 @@ \item{pch.assets}{plotting character of the assets, same as in \code{\link{plot}}} + + \item{cex.assets}{A numerical value giving the amount by + which the asset points and labels should be magnified + relative to the default.} } \description{ This function charts the efficient frontier and From noreply at r-forge.r-project.org Fri Aug 30 05:23:29 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 30 Aug 2013 05:23:29 +0200 (CEST) Subject: [Returnanalytics-commits] r2936 - in pkg/PortfolioAnalytics: . 
R man Message-ID: <20130830032329.61B541853B6@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-30 05:23:27 +0200 (Fri, 30 Aug 2013) New Revision: 2936 Added: pkg/PortfolioAnalytics/R/charts.risk.R pkg/PortfolioAnalytics/man/chart.RiskBudget.Rd Modified: pkg/PortfolioAnalytics/NAMESPACE pkg/PortfolioAnalytics/R/charts.efficient.frontier.R Log: Adding function to plot contribution and percent contribution for resulting objective_measures of risk_budget_objective. Modifying efficient frontier chart for optimize.portfolio.ROI. Modified: pkg/PortfolioAnalytics/NAMESPACE =================================================================== --- pkg/PortfolioAnalytics/NAMESPACE 2013-08-30 00:25:15 UTC (rev 2935) +++ pkg/PortfolioAnalytics/NAMESPACE 2013-08-30 03:23:27 UTC (rev 2936) @@ -8,6 +8,7 @@ export(chart.EfficientFrontier.optimize.portfolio) export(chart.EfficientFrontier) export(chart.EfficientFrontierOverlay) +export(chart.RiskBudget) export(chart.RiskReward.optimize.portfolio.DEoptim) export(chart.RiskReward.optimize.portfolio.GenSA) export(chart.RiskReward.optimize.portfolio.pso) Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-30 00:25:15 UTC (rev 2935) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-30 03:23:27 UTC (rev 2936) @@ -89,8 +89,12 @@ if(is.na(mtc)) { mtc <- pmatch(paste(match.col,match.col,sep='.'), columnames) } - if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") - opt_risk <- xtract[mtc] + if(is.na(mtc)){ + # if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") + opt_risk <- applyFUN(R=R, weights=wts, FUN=match.col) + } else { + opt_risk <- xtract[mtc] + } # get the data to plot scatter of asset returns asset_ret <- scatterFUN(R=R, FUN="mean") Added: pkg/PortfolioAnalytics/R/charts.risk.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.risk.R (rev 0) +++ pkg/PortfolioAnalytics/R/charts.risk.R 2013-08-30 03:23:27 UTC (rev 2936) @@ -0,0 +1,114 @@ + +#' Chart risk contribution or percent contribution +#' +#' This function charts the contribution or percent contribution of the resulting +#' objective measures in \code{risk_budget_objectives}. +#' +#' @param object optimal portfolio object created by \code{\link{optimize.portfolio}} +#' @param ... 
passthrough parameters to \code{\link{plot}} +#' @param risk.type plot risk contribution in absolute terms or percentage contribution +#' @param main main title for the chart +#' @param ylab label for the y-axis +#' @param xlab a title for the x axis: see \code{\link{title}} +#' @param cex.lab The magnification to be used for x and y labels relative to the current setting of \code{cex} +#' @param cex.axis The magnification to be used for axis annotation relative to the current setting of \code{cex} +#' @param element.color color for the default plot lines +#' @param las numeric in \{0,1,2,3\}; the style of axis labels +#' \describe{ +#' \item{0:}{always parallel to the axis [\emph{default}],} +#' \item{1:}{always horizontal,} +#' \item{2:}{always perpendicular to the axis,} +#' \item{3:}{always vertical.} +#' } +#' @param ylim set the y-axis limit, same as in \code{\link{plot}} +#' @author Ross Bennett +#' @export +chart.RiskBudget <- function(object, ..., risk.type="absolute", main="Risk Contribution", ylab="", xlab=NULL, cex.axis=0.8, cex.lab=0.8, element.color="darkgray", las=3, ylim=NULL){ + if(!inherits(object, "optimize.portfolio")) stop("object must be of class optimize.portfolio") + portfolio <- object$portfolio + # class of each objective + obj_class <- sapply(portfolio$objectives, function(x) class(x)[1]) + + if(!("risk_budget_objective" %in% obj_class)) print("no risk_budget_objective") + + # Get the index number of the risk_budget_objectives + rb_idx <- which(obj_class == "risk_budget_objective") + + if(length(rb_idx) > 1) message(paste(length(rb_idx), "risk_budget_objectives, generating multiple plots.")) + + # list to store $contribution values + contrib <- list() + + # list to store $pct_contrib values + pct_contrib <- list() + + for(i in 1:length(object$objective_measures)){ + if(length(object$objective_measures[[i]]) > 1){ + # we have an objective measure with contribution and pct_contrib + contrib[[i]] <- object$objective_measures[[i]][2] + pct_contrib[[i]] <- object$objective_measures[[i]][3] + } + } + + columnnames <- names(object$weights) + numassets <- length(columnnames) + + if(is.null(xlab)) + minmargin = 3 + else + minmargin = 5 + if(main=="") topmargin=1 else topmargin=4 + if(las > 1) {# set the bottom border to accommodate labels + bottommargin = max(c(minmargin, (strwidth(columnnames,units="in"))/par("cin")[1])) * cex.lab + if(bottommargin > 10 ) { + bottommargin<-10 + columnnames<-substr(columnnames,1,19) + # par(srt=45) #TODO figure out how to use text() and srt to rotate long labels + } + } + else { + bottommargin = minmargin + } + par(mar = c(bottommargin, 4, topmargin, 2) +.1) + + if(risk.type == "absolute"){ + for(i in 1:length(rb_idx)){ + if(is.null(ylim)){ + ylim <- range(contrib[[i]][[1]]) + ylim[1] <- min(0, ylim[1]) + ylim[2] <- ylim[2] * 1.15 + } + + # Plot values of contribution + plot(contrib[[i]][[1]], type="b", axes=FALSE, xlab='', ylim=ylim, ylab=ylab, main=main, cex.lab=cex.lab, ...) 
+ axis(2, cex.axis = cex.axis, col = element.color) + axis(1, labels=columnnames, at=1:numassets, las=las, cex.axis = cex.axis, col = element.color) + box(col = element.color) + } + } + + if(risk.type %in% c("percent", "percentage", "pct_contrib")){ + for(i in 1:length(rb_idx)){ + min_prisk <- portfolio$objectives[[rb_idx[i]]]$min_prisk + max_prisk <- portfolio$objectives[[rb_idx[i]]]$max_prisk + if(is.null(ylim)){ + ylim <- range(c(max_prisk, pct_contrib[[i]][[1]])) + ylim[1] <- min(0, ylim[1]) + ylim[2] <- ylim[2] * 1.15 + } + + # plot percentage contribution + plot(pct_contrib[[i]][[1]], type="b", axes=FALSE, xlab='', ylim=ylim, ylab=ylab, main=main, cex.lab=cex.lab, ...) + # Check for minimum percentage risk (min_prisk) argument + if(!is.null(min_prisk)){ + points(min_prisk, type="b", col="darkgray", lty="solid", lwd=2, pch=24) + } + if(!is.null(max_prisk)){ + points(max_prisk, type="b", col="darkgray", lty="solid", lwd=2, pch=25) + } + axis(2, cex.axis = cex.axis, col = element.color) + axis(1, labels=columnnames, at=1:numassets, las=las, cex.axis = cex.axis, col = element.color) + box(col = element.color) + } + } +} Added: pkg/PortfolioAnalytics/man/chart.RiskBudget.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.RiskBudget.Rd (rev 0) +++ pkg/PortfolioAnalytics/man/chart.RiskBudget.Rd 2013-08-30 03:23:27 UTC (rev 2936) @@ -0,0 +1,51 @@ +\name{chart.RiskBudget} +\alias{chart.RiskBudget} +\title{Chart risk contribution or percent contribution} +\usage{ + chart.RiskBudget(object, ..., risk.type = "absolute", + main = "Risk Contribution", ylab = "", xlab = NULL, + cex.axis = 0.8, cex.lab = 0.8, + element.color = "darkgray", las = 3, ylim = NULL) +} +\arguments{ + \item{object}{optimal portfolio object created by + \code{\link{optimize.portfolio}}} + + \item{...}{passthrough parameters to \code{\link{plot}}} + + \item{risk.type}{plot risk contribution in absolute terms + or percentage contribution} + + \item{main}{main title for the chart} + + \item{ylab}{label for the y-axis} + + \item{xlab}{a title for the x axis: see + \code{\link{title}}} + + \item{cex.lab}{The magnification to be used for x and y + labels relative to the current setting of \code{cex}} + + \item{cex.axis}{The magnification to be used for axis + annotation relative to the current setting of \code{cex}} + + \item{element.color}{color for the default plot lines} + + \item{las}{numeric in \{0,1,2,3\}; the style of axis + labels \describe{ \item{0:}{always parallel to the axis + [\emph{default}],} \item{1:}{always horizontal,} + \item{2:}{always perpendicular to the axis,} + \item{3:}{always vertical.} }} + + \item{ylim}{set the y-axis limit, same as in + \code{\link{plot}}} +} +\description{ + This function charts the contribution or percent + contribution of the resulting objective measures in + \code{risk_budget_objectives}. +} +\author{ + Ross Bennett +} + From noreply at r-forge.r-project.org Fri Aug 30 05:44:48 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 30 Aug 2013 05:44:48 +0200 (CEST) Subject: [Returnanalytics-commits] r2937 - in pkg/PortfolioAnalytics: . 
R man Message-ID: <20130830034448.94BEF1853B6@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-30 05:44:48 +0200 (Fri, 30 Aug 2013) New Revision: 2937 Modified: pkg/PortfolioAnalytics/DESCRIPTION pkg/PortfolioAnalytics/R/constraints.R pkg/PortfolioAnalytics/R/generics.R pkg/PortfolioAnalytics/man/box_constraint.Rd pkg/PortfolioAnalytics/man/constraint.Rd pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd pkg/PortfolioAnalytics/man/group_constraint.Rd Log: Changing 'seed' to 'initial' where appropriate in generic methods and constraints. Modified: pkg/PortfolioAnalytics/DESCRIPTION =================================================================== --- pkg/PortfolioAnalytics/DESCRIPTION 2013-08-30 03:23:27 UTC (rev 2936) +++ pkg/PortfolioAnalytics/DESCRIPTION 2013-08-30 03:44:48 UTC (rev 2937) @@ -53,3 +53,4 @@ 'chart.Weights.R' 'chart.RiskReward.R' 'charts.efficient.frontier.R' + 'charts.risk.R' Modified: pkg/PortfolioAnalytics/R/constraints.R =================================================================== --- pkg/PortfolioAnalytics/R/constraints.R 2013-08-30 03:23:27 UTC (rev 2936) +++ pkg/PortfolioAnalytics/R/constraints.R 2013-08-30 03:44:48 UTC (rev 2937) @@ -17,17 +17,17 @@ #' #' See main documentation in \code{\link{add.constraint}} #' -#' @param assets number of assets, or optionally a named vector of assets specifying seed weights +#' @param assets number of assets, or optionally a named vector of assets specifying initial weights #' @param ... any other passthru parameters #' @param min numeric or named vector specifying minimum weight box constraints #' @param max numeric or named vector specifying maximum weight box constraints -#' @param min_mult numeric or named vector specifying minimum multiplier box constraint from seed weight in \code{assets} -#' @param max_mult numeric or named vector specifying maximum multiplier box constraint from seed weight in \code{assets} +#' @param min_mult numeric or named vector specifying minimum multiplier box constraint from initial weight in \code{assets} +#' @param max_mult numeric or named vector specifying maximum multiplier box constraint from initial weight in \code{assets} #' @param min_sum minimum sum of all asset weights, default .99 #' @param max_sum maximum sum of all asset weights, default 1.01 #' @param weight_seq seed sequence of weights, see \code{\link{generatesequence}} #' @param type character type of the constraint to add or update -#' @param assets number of assets, or optionally a named vector of assets specifying seed weights +#' @param assets number of assets, or optionally a named vector of assets specifying initial weights #' @param ... any other passthru parameters #' @param constrclass character to name the constraint class #' @author Peter Carl, Brian G.
Peterson, Ross Bennett @@ -50,7 +50,7 @@ if (length(assets) == 1) { nassets=assets #we passed in a number of assets, so we need to create the vector - message("assuming equal weighted seed portfolio") + message("assuming equal weighted initial portfolio") assets<-rep(1/nassets,nassets) } else { nassets = length(assets) @@ -65,7 +65,7 @@ if(is.character(assets)){ nassets=length(assets) assetnames=assets - message("assuming equal weighted seed portfolio") + message("assuming equal weighted initial portfolio") assets<-rep(1/nassets,nassets) names(assets)<-assetnames # set names, so that other code can access it, # and doesn't have to know about the character vector @@ -132,7 +132,7 @@ max_mult = NULL } } - ##now adjust min and max to account for min_mult and max_mult from seed + ##now adjust min and max to account for min_mult and max_mult from initial if(!is.null(min_mult) & !is.null(min)) { tmp_min <- assets*min_mult #TODO FIXME this creates a list, and it should create a named vector or matrix @@ -364,11 +364,11 @@ #' This function is called by add.constraint when type="box" is specified. see \code{\link{add.constraint}} #' #' @param type character type of the constraint -#' @param assets number of assets, or optionally a named vector of assets specifying seed weights +#' @param assets number of assets, or optionally a named vector of assets specifying initial weights #' @param min numeric or named vector specifying minimum weight box constraints #' @param max numeric or named vector specifying maximum weight box constraints -#' @param min_mult numeric or named vector specifying minimum multiplier box constraint from seed weight in \code{assets} -#' @param max_mult numeric or named vector specifying maximum multiplier box constraint from seed weight in \code{assets} +#' @param min_mult numeric or named vector specifying minimum multiplier box constraint from initial weight in \code{assets} +#' @param max_mult numeric or named vector specifying maximum multiplier box constraint from initial weight in \code{assets} #' @param enabled TRUE/FALSE #' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE. #' @param \dots any other passthru parameters to specify box constraints @@ -461,7 +461,7 @@ } } - # now adjust min and max to account for min_mult and max_mult from seed + # now adjust min and max to account for min_mult and max_mult from initial if(!is.null(min_mult) & !is.null(min)) { tmp_min <- assets * min_mult #TODO FIXME this creates a list, and it should create a named vector or matrix @@ -485,7 +485,7 @@ #' This function is called by add.constraint when type="group" is specified. see \code{\link{add.constraint}} #' #' @param type character type of the constraint -#' @param assets number of assets, or optionally a named vector of assets specifying seed weights +#' @param assets number of assets, or optionally a named vector of assets specifying initial weights #' @param groups vector specifying the groups of the assets #' @param group_labels character vector to label the groups (e.g. size, asset class, style, etc.) #' @param group_min numeric or vector specifying minimum weight group constraints @@ -916,7 +916,7 @@ #' \code{B} matrix without column names or row names.
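Reading the note above, a user-supplied \code{B} apparently needs asset row names and factor column names; a minimal sketch of building a conforming exposure matrix follows (the asset names, factor names and exposure values are hypothetical placeholders, not from the commit):

# Hypothetical 4-asset, 2-factor exposure matrix for a factor_exposure constraint;
# rows are named by asset, columns by risk factor
B <- matrix(c(0.9, 1.1, 1.0, 0.8,   # exposures to the first factor
              0.2, 0.3, 0.1, 0.4),  # exposures to the second factor
            nrow = 4, ncol = 2,
            dimnames = list(c("Asset1", "Asset2", "Asset3", "Asset4"),
                            c("factor1", "factor2")))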
#' #' @param type character type of the constraint -#' @param assets named vector of assets specifying seed weights +#' @param assets named vector of assets specifying initial weights #' @param B vector or matrix of risk factor exposures #' @param lower vector of lower bounds of constraints for risk factor exposures #' @param upper vector of upper bounds of constraints for risk factor exposures Modified: pkg/PortfolioAnalytics/R/generics.R =================================================================== --- pkg/PortfolioAnalytics/R/generics.R 2013-08-30 03:23:27 UTC (rev 2936) +++ pkg/PortfolioAnalytics/R/generics.R 2013-08-30 03:44:48 UTC (rev 2937) @@ -181,7 +181,7 @@ cat("PortfolioAnalytics Portfolio Specification Summary", "\n") cat(rep("*", 50) ,"\n", sep="") - cat("Assets and Seed Weights:\n") + cat("Assets and Initial Weights:\n") print(object$assets) cat("\n") @@ -492,8 +492,8 @@ } cat("\n") - # get seed portfolio - cat("Portfolio Assets and Seed Weights:\n") + # get initial portfolio + cat("Portfolio Assets and Initial Weights:\n") print.default(object$portfolio$assets) cat("\n") @@ -596,7 +596,7 @@ cat("Turnover Target Constraint:\n") print(constraints$turnover_target) cat("\n") - cat("Realized turnover from seed weights:\n") + cat("Realized turnover from initial weights:\n") print(turnover(object$weights, wts.init=object$portfolio$assets)) cat("\n") Modified: pkg/PortfolioAnalytics/man/box_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/box_constraint.Rd 2013-08-30 03:23:27 UTC (rev 2936) +++ pkg/PortfolioAnalytics/man/box_constraint.Rd 2013-08-30 03:44:48 UTC (rev 2937) @@ -9,7 +9,7 @@ \item{type}{character type of the constraint} \item{assets}{number of assets, or optionally a named - vector of assets specifying seed weights} + vector of assets specifying initial weights} \item{min}{numeric or named vector specifying minimum weight box constraints} @@ -18,11 +18,11 @@ weight box constraints} \item{min_mult}{numeric or named vector specifying - minimum multiplier box constraint from seed weight in + minimum multiplier box constraint from initial weight in \code{assets}} \item{max_mult}{numeric or named vector specifying - maximum multiplier box constraint from seed weight in + maximum multiplier box constraint from initial weight in \code{assets}} \item{enabled}{TRUE/FALSE} Modified: pkg/PortfolioAnalytics/man/constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/constraint.Rd 2013-08-30 03:23:27 UTC (rev 2936) +++ pkg/PortfolioAnalytics/man/constraint.Rd 2013-08-30 03:44:48 UTC (rev 2937) @@ -13,7 +13,7 @@ } \arguments{ \item{assets}{number of assets, or optionally a named - vector of assets specifying seed weights} + vector of assets specifying initial weights} \item{...}{any other passthru parameters} @@ -24,11 +24,11 @@ weight box constraints} \item{min_mult}{numeric or named vector specifying - minimum multiplier box constraint from seed weight in + minimum multiplier box constraint from initial weight in \code{assets}} \item{max_mult}{numeric or named vector specifying - maximum multiplier box constraint from seed weight in + maximum multiplier box constraint from initial weight in \code{assets}} \item{min_sum}{minimum sum of all asset weights, default @@ -44,7 +44,7 @@ update} \item{assets}{number of assets, or optionally a named - vector of assets specifying seed weights} + vector of assets specifying initial weights} \item{...}{any other 
passthru parameters} Modified: pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd 2013-08-30 03:23:27 UTC (rev 2936) +++ pkg/PortfolioAnalytics/man/factor_exposure_constraint.Rd 2013-08-30 03:44:48 UTC (rev 2937) @@ -9,7 +9,7 @@ \arguments{ \item{type}{character type of the constraint} - \item{assets}{named vector of assets specifying seed + \item{assets}{named vector of assets specifying initial weights} \item{B}{vector or matrix of risk factor exposures} Modified: pkg/PortfolioAnalytics/man/group_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/group_constraint.Rd 2013-08-30 03:23:27 UTC (rev 2936) +++ pkg/PortfolioAnalytics/man/group_constraint.Rd 2013-08-30 03:44:48 UTC (rev 2937) @@ -10,7 +10,7 @@ \item{type}{character type of the constraint} \item{assets}{number of assets, or optionally a named - vector of assets specifying seed weights} + vector of assets specifying initial weights} \item{groups}{vector specifying the groups of the assets} From noreply at r-forge.r-project.org Fri Aug 30 07:11:46 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 30 Aug 2013 07:11:46 +0200 (CEST) Subject: [Returnanalytics-commits] r2938 - in pkg/PortfolioAnalytics: R man Message-ID: <20130830051147.0772A1852C0@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-30 07:11:46 +0200 (Fri, 30 Aug 2013) New Revision: 2938 Modified: pkg/PortfolioAnalytics/R/charts.risk.R pkg/PortfolioAnalytics/man/chart.RiskBudget.Rd Log: Modifying chart.RiskBudget to plot neighbor portfolios. Modified: pkg/PortfolioAnalytics/R/charts.risk.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.risk.R 2013-08-30 03:44:48 UTC (rev 2937) +++ pkg/PortfolioAnalytics/R/charts.risk.R 2013-08-30 05:11:46 UTC (rev 2938) @@ -4,7 +4,17 @@ #' This function charts the contribution or percent contribution of the resulting #' objective measures in \code{risk_budget_objectives}. #' +#' \code{neighbors} may be specified in three ways. +#' The first is as a single number of neighbors. This will extract the \code{neighbors} closest +#' portfolios in terms of the \code{out} numerical statistic. +#' The second method consists of a numeric vector for \code{neighbors}. +#' This will extract the \code{neighbors} with portfolio index numbers that correspond to the vector contents. +#' The third method for specifying \code{neighbors} is to pass in a matrix. +#' This matrix should look like the output of \code{\link{extractStats}}, and should contain +#' properly named contribution and pct_contrib columns. +#' #' @param object optimal portfolio object created by \code{\link{optimize.portfolio}} +#' @param neighbors risk contribution or pct_contrib of neighbor portfolios to be plotted #' @param ... 
passthrough parameters to \code{\link{plot}} #' @param risk.type plot risk contribution in absolute terms or percentage contribution #' @param main main title for the chart @@ -23,7 +33,7 @@ #' @param ylim set the y-axis limit, same as in \code{\link{plot}} #' @author Ross Bennett #' @export -chart.RiskBudget <- function(object, ..., risk.type="absolute", main="Risk Contribution", ylab="", xlab=NULL, cex.axis=0.8, cex.lab=0.8, element.color="darkgray", las=3, ylim=NULL){ +chart.RiskBudget <- function(object, neighbors=NULL, ..., risk.type="absolute", main="Risk Contribution", ylab="", xlab=NULL, cex.axis=0.8, cex.lab=0.8, element.color="darkgray", las=3, ylim=NULL){ if(!inherits(object, "optimize.portfolio")) stop("object must be of class optimize.portfolio") portfolio <- object$portfolio # class of each objective @@ -72,33 +82,63 @@ par(mar = c(bottommargin, 4, topmargin, 2) +.1) if(risk.type == "absolute"){ - for(i in 1:length(rb_idx)){ + for(ii in 1:length(rb_idx)){ if(is.null(ylim)){ - ylim <- range(contrib[[i]][[1]]) + ylim <- range(contrib[[ii]][[1]]) ylim[1] <- min(0, ylim[1]) ylim[2] <- ylim[2] * 1.15 } + objname <- portfolio$objectives[[rb_idx[ii]]]$name + # Plot values of contribution + plot(contrib[[ii]][[1]], type="n", axes=FALSE, xlab="", ylim=ylim, ylab=paste(objname, "Contribution", sep=" "), main=main, cex.lab=cex.lab, ...) - # Plot values of contribution - plot(contrib[[i]][[1]], type="b", axes=FALSE, xlab='', ylim=ylim, ylab=ylab, main=main, cex.lab=cex.lab, ...) + # neighbors needs to be in the loop if there is more than one risk_budget_objective + if(!is.null(neighbors)){ + if(is.vector(neighbors)){ + xtract <- extractStats(object) + riskcols <- grep(paste(objname, "contribution", sep="."), colnames(xtract)) + if(length(riskcols) == 0) stop("Could not extract risk column") + if(length(neighbors) == 1){ + # overplot nearby portfolios defined by 'out' + orderx <- order(xtract[,"out"]) + subsetx <- head(xtract[orderx,], n=neighbors) + for(i in 1:neighbors) points(subsetx[i, riskcols], type="b", col="lightblue") + } else { + # assume we have a vector of portfolio numbers + subsetx <- xtract[neighbors, riskcols] + for(i in 1:length(neighbors)) points(subsetx[i,], type="b", col="lightblue") + } + } # end if neighbors is a vector + if(is.matrix(neighbors) | is.data.frame(neighbors)){ + # the user has likely passed in a matrix containing calculated values for contrib or pct_contrib + nbriskcol <- grep(paste(objname, "contribution", sep="."), colnames(neighbors)) + if(length(nbriskcol) == 0) stop(paste("must have '", objname,".contribution' as column name in neighbors",sep="")) + if(length(nbriskcol) != numassets) stop("number of 'contribution' columns must equal number of assets") + for(i in 1:nrow(neighbors)) points(as.numeric(neighbors[i, nbriskcol]), type="b", col="lightblue") + # note that here we need to get risk cols separately from the matrix, not from xtract + # also note the need for as.numeric. points() doesn't like matrix inputs + } # end neighbors plot for matrix or data.frame + } # end if neighbors is not null + points(contrib[[ii]][[1]], type="b", ...)
axis(2, cex.axis = cex.axis, col = element.color) axis(1, labels=columnnames, at=1:numassets, las=las, cex.axis = cex.axis, col = element.color) box(col = element.color) - } - } + } # end for loop of risk_budget_objective + } # end plot for absolute risk.type if(risk.type %in% c("percent", "percentage", "pct_contrib")){ - for(i in 1:length(rb_idx)){ - min_prisk <- portfolio$objectives[[rb_idx[i]]]$min_prisk - max_prisk <- portfolio$objectives[[rb_idx[i]]]$max_prisk + for(ii in 1:length(rb_idx)){ + min_prisk <- portfolio$objectives[[rb_idx[ii]]]$min_prisk + max_prisk <- portfolio$objectives[[rb_idx[ii]]]$max_prisk if(is.null(ylim)){ - ylim <- range(c(max_prisk, pct_contrib[[i]][[1]])) - ylim[1] <- min(0, ylim[1]) - ylim[2] <- ylim[2] * 1.15 + #ylim <- range(c(max_prisk, pct_contrib[[i]][[1]])) + #ylim[1] <- min(0, ylim[1]) + #ylim[2] <- ylim[2] * 1.15 + ylim <- c(0, 1) } - + objname <- portfolio$objectives[[rb_idx[ii]]]$name # plot percentage contribution - plot(pct_contrib[[i]][[1]], type="b", axes=FALSE, xlab='', ylim=ylim, ylab=ylab, main=main, cex.lab=cex.lab, ...) + plot(pct_contrib[[ii]][[1]], type="n", axes=FALSE, xlab='', ylim=ylim, ylab=paste(objname, " % Contribution", sep=" "), main=main, cex.lab=cex.lab, ...) # Check for minimum percentage risk (min_prisk) argument if(!is.null(min_prisk)){ points(min_prisk, type="b", col="darkgray", lty="solid", lwd=2, pch=24) @@ -106,9 +146,44 @@ if(!is.null(max_prisk)){ points(max_prisk, type="b", col="darkgray", lty="solid", lwd=2, pch=25) } + + # neighbors needs to be in the loop if there is more than one risk_budget_objective + if(!is.null(neighbors)){ + if(is.vector(neighbors)){ + xtract <- extractStats(object) + if(risk.type == "absolute"){ + riskcols <- grep(paste(objname, "contribution", sep="."), colnames(xtract)) + } else if(risk.type %in% c("percent", "percentage", "pct_contrib")){ + riskcols <- grep(paste(objname, "pct_contrib", sep="."), colnames(xtract)) + } + if(length(riskcols) == 0) stop("Could not extract risk column") + if(length(neighbors) == 1){ + # overplot nearby portfolios defined by 'out' + orderx <- order(xtract[,"out"]) + subsetx <- head(xtract[orderx,], n=neighbors) + for(i in 1:neighbors) points(subsetx[i, riskcols], type="b", col="lightblue") + } else { + # assume we have a vector of portfolio numbers + subsetx <- xtract[neighbors, riskcols] + for(i in 1:length(neighbors)) points(subsetx[i,], type="b", col="lightblue") + } + } # end if neighbors is a vector + if(is.matrix(neighbors) | is.data.frame(neighbors)){ + # the user has likely passed in a matrix containing calculated values for contrib or pct_contrib + nbriskcol <- grep(paste(objname, "pct_contrib", sep="."), colnames(neighbors)) + if(length(nbriskcol) == 0) stop(paste("must have '", objname,".pct_contrib' as column name in neighbors",sep="")) + if(length(nbriskcol) != numassets) stop("number of 'pct_contrib' columns must equal number of assets") + for(i in 1:nrow(neighbors)) points(as.numeric(neighbors[i, nbriskcol]), type="b", col="lightblue") + # note that here we need to get risk cols separately from the matrix, not from xtract + # also note the need for as.numeric. points() doesn't like matrix inputs + } # end neighbors plot for matrix or data.frame + } # end if neighbors is not null + points(pct_contrib[[ii]][[1]], type="b", ...)
axis(2, cex.axis = cex.axis, col = element.color) axis(1, labels=columnnames, at=1:numassets, las=las, cex.axis = cex.axis, col = element.color) box(col = element.color) - } - } + } # end for loop of risk_budget_objective + } # end plot for pct_contrib risk.type + + } Modified: pkg/PortfolioAnalytics/man/chart.RiskBudget.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.RiskBudget.Rd 2013-08-30 03:44:48 UTC (rev 2937) +++ pkg/PortfolioAnalytics/man/chart.RiskBudget.Rd 2013-08-30 05:11:46 UTC (rev 2938) @@ -2,15 +2,18 @@ \alias{chart.RiskBudget} \title{Chart risk contribution or percent contribution} \usage{ - chart.RiskBudget(object, ..., risk.type = "absolute", - main = "Risk Contribution", ylab = "", xlab = NULL, - cex.axis = 0.8, cex.lab = 0.8, + chart.RiskBudget(object, neighbors = NULL, ..., + risk.type = "absolute", main = "Risk Contribution", + ylab = "", xlab = NULL, cex.axis = 0.8, cex.lab = 0.8, element.color = "darkgray", las = 3, ylim = NULL) } \arguments{ \item{object}{optimal portfolio object created by \code{\link{optimize.portfolio}}} + \item{neighbors}{risk contribution or pct_contrib of + neighbor portfolios to be plotted} + \item{...}{passthrough parameters to \code{\link{plot}}} \item{risk.type}{plot risk contribution in absolute terms @@ -45,6 +48,19 @@ contribution of the resulting objective measures in \code{risk_budget_objectives}. } +\details{ + \code{neighbors} may be specified in three ways. The + first is as a single number of neighbors. This will + extract the \code{neighbors} closest portfolios in terms + of the \code{out} numerical statistic. The second method + consists of a numeric vector for \code{neighbors}. This + will extract the \code{neighbors} with portfolio index + numbers that correspond to the vector contents. The third + method for specifying \code{neighbors} is to pass in a + matrix. This matrix should look like the output of + \code{\link{extractStats}}, and should contain properly + named contribution and pct_contrib columns. 
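As a usage sketch of the three forms just described (the optimal portfolio object \code{opt} is hypothetical; \code{extractStats} is the function named above):

chart.RiskBudget(opt, neighbors = 10)              # the 10 portfolios closest to optimal by 'out'
chart.RiskBudget(opt, neighbors = c(25, 50, 75))   # specific portfolio index numbers
xt <- extractStats(opt)                            # matrix with named contribution / pct_contrib columns
chart.RiskBudget(opt, neighbors = xt[1:5, ], risk.type = "pct_contrib")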
+} \author{ Ross Bennett } From noreply at r-forge.r-project.org Fri Aug 30 10:01:57 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 30 Aug 2013 10:01:57 +0200 (CEST) Subject: [Returnanalytics-commits] r2939 - in pkg/PerformanceAnalytics/sandbox/pulkit: R man vignettes Message-ID: <20130830080158.19EA2185C5D@r-forge.r-project.org> Author: pulkit Date: 2013-08-30 10:01:57 +0200 (Fri, 30 Aug 2013) New Revision: 2939 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw Log: latex changes Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-30 05:11:46 UTC (rev 2938) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-30 08:01:57 UTC (rev 2939) @@ -5,7 +5,6 @@ #'The Economic Drawdown Controlled Optimal Portfolio Strategy(EDD-COPS) has #'the portfolio fraction allocated to single risky asset as: #' -#' \deqn{x_t = Max\left\{0,\biggl(\frac{\lambda/\sigma + 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-EDD(t)}{1-EDD(t)}\biggr]\right\}} #' #' The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. #'dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-30 05:11:46 UTC (rev 2938) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-30 08:01:57 UTC (rev 2939) @@ -5,17 +5,11 @@ #'The Rolling Economic Drawdown Controlled Optimal Portfolio Strategy(REDD-COPS) has #'the portfolio fraction allocated to single risky asset as: #' -#' \deqn{x_t = Max\left\{0,\biggl(\frac{\lambda/\sigma + 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-REDD(t,h)}{1-REDD(t,h)}\biggr]\right\}} #' #' The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. #' #' For two risky assets in REDD-COPS,dynamic asset allocation weights are : #' -#'\deqn{\left[\begin{array}{c} x_1 \\ x_2 \end{array}\right] = \frac{1}{1-{\rho}^2} -#' \left[\begin{array}{c} (\lambda_1 + {1/2}\sigma_1 - \rho.(\lambda_2 + {1/2}\sigma_2 -#' )/\sigma_1) \\ (\lambda_1 + {1/2}\sigma_1 - \rho(\lambda_2 + {1/2}\sigma_2)/\sigma_ -#' 1) \end{array}\right] Max\left\{0,\biggl(\frac{\lambda/\sigma + 1/2}{1-\delta -#' .\gamma}\biggr).\biggl[\frac{\delta-REDD(t,h)}{1-REDD(t,h)}\biggr]\right\}} #' #'The portion of the risk free asset is \eqn{x_f = 1 - x_1 - x_2}. 
#'dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R 2013-08-30 05:11:46 UTC (rev 2938) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REM.R 2013-08-30 08:01:57 UTC (rev 2939) @@ -5,7 +5,7 @@ #'Rolling Economic Max at time t, looking back at portfolio Wealth history #'for a rolling window of length H is given by: #' -#'\deqn{REM(t,h)=\max_{t-H \leq s}\[(1+r_f)^{t-s}W_s\]} +#'\deqn{REM(t,h)=\max_{t-H \leq s}[(1+r_f)^{t-s}W_s]} #' #'Here rf is the average realized risk free rate over a period of length t-s. If the risk free rate is changing, this is used to compound. #' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd 2013-08-30 05:11:46 UTC (rev 2938) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd 2013-08-30 08:01:57 UTC (rev 2939) @@ -27,9 +27,6 @@ Strategy(EDD-COPS) has the portfolio fraction allocated to single risky asset as: - \deqn{x_t = Max\left\{0,\biggl(\frac{\lambda/\sigma + - 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-EDD(t)}{1-EDD(t)}\biggr]\right\}} - The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd 2013-08-30 05:11:46 UTC (rev 2938) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd 2013-08-30 08:01:57 UTC (rev 2939) @@ -35,24 +35,12 @@ Portfolio Strategy(REDD-COPS) has the portfolio fraction allocated to single risky asset as: - \deqn{x_t = Max\left\{0,\biggl(\frac{\lambda/\sigma + - 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-REDD(t,h)}{1-REDD(t,h)}\biggr]\right\}} - The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. For two risky assets in REDD-COPS,dynamic asset allocation weights are : - \deqn{\left[\begin{array}{c} x_1 \\ x_2 - \end{array}\right] = \frac{1}{1-{\rho}^2} - \left[\begin{array}{c} (\lambda_1 + {1/2}\sigma_1 - - \rho.(\lambda_2 + {1/2}\sigma_2 )/\sigma_1) \\ (\lambda_1 - + {1/2}\sigma_1 - \rho(\lambda_2 + {1/2}\sigma_2)/\sigma_ - 1) \end{array}\right] - Max\left\{0,\biggl(\frac{\lambda/\sigma + 1/2}{1-\delta - .\gamma}\biggr).\biggl[\frac{\delta-REDD(t,h)}{1-REDD(t,h)}\biggr]\right\}} - The portion of the risk free asset is \eqn{x_f = 1 - x_1 - x_2}. dt<-read.zoo("../data/ret.csv",sep=",",header = Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd 2013-08-30 05:11:46 UTC (rev 2938) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/rollEconomicMax.Rd 2013-08-30 08:01:57 UTC (rev 2939) @@ -24,7 +24,7 @@ Wealth history for a rolling window of length H is given by: - \deqn{REM(t,h)=\max_{t-H \leq s}\[(1+r_f)^{t-s}W_s\]} + \deqn{REM(t,h)=\max_{t-H \leq s}[(1+r_f)^{t-s}W_s]} Here rf is the average realized risk free rate over a period of length t-s. If the risk free rate is changing.
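A minimal base-R sketch of the corrected definition above (a hypothetical helper, not the package's rollEconomicMax implementation; it assumes a numeric wealth vector W and a constant per-period risk-free rate rf):

# REM(t, H): compound each wealth level in the trailing window forward at rf, take the max
rollEM <- function(W, rf = 0, h = 12) {
  sapply(seq_along(W), function(t) {
    s <- max(1, t - h):t             # rolling window of length at most h
    max((1 + rf)^(t - s) * W[s])     # (1+rf)^(t-s) * W_s, maximized over s
  })
}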
Modified: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw 2013-08-30 05:11:46 UTC (rev 2938) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw 2013-08-30 08:01:57 UTC (rev 2939) @@ -90,15 +90,11 @@ The Rolling Economic Drawdown Controlled Optimal Portfolio Strategy(REDD-COPS) has the portfolio fraction allocated to single risky asset as: -\deqn{x_t = Max\left\{0,\biggl(\frac{\lambda/\sigma + 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-REDD(t,h)}{1-REDD(t,h)}\biggr]\right\}} - + The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. For two risky assets in REDD-COPS,dynamic asset allocation weights are : - -\deqn{\left[\begin{array}{c} x_1 \\ x_2 \end{array}\right] = \frac{1}{1-{\rho}^2} -\left[\begin{array}{c} (\lambda_1 + {1/2}\sigma_1 - \rho.(\lambda_2 + {1/2}\sigma_2)/\sigma_1) \\ (\lambda_1 + {1/2}\sigma_1 - \rho(\lambda_2 + {1/2}\sigma_2)/\sigma_1) \end{array}\right] Max\left\{0,\biggl(\frac{\lambda/\sigma + 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-REDD(t,h)}{1-REDD(t,h)}\biggr]\right\}} - + The portion of the risk free asset is \eqn{x_f = 1 - x_1 - x_2}. \subsection{Usage} From noreply at r-forge.r-project.org Fri Aug 30 10:39:38 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 30 Aug 2013 10:39:38 +0200 (CEST) Subject: [Returnanalytics-commits] r2940 - in pkg/PerformanceAnalytics/sandbox/pulkit: R man Message-ID: <20130830083938.7B7181801E0@r-forge.r-project.org> Author: pulkit Date: 2013-08-30 10:39:38 +0200 (Fri, 30 Aug 2013) New Revision: 2940 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd Log: latex changes Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-30 08:01:57 UTC (rev 2939) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/EDDCOPS.R 2013-08-30 08:39:38 UTC (rev 2940) @@ -4,7 +4,7 @@ #'@description #'The Economic Drawdown Controlled Optimal Portfolio Strategy(EDD-COPS) has #'the portfolio fraction allocated to single risky asset as: -#' +#' \deqn{x_t = Max\left\{0,\biggl(\frac{\lambda/\sigma + 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-EDD(t)}{1-EDD(t)}\biggr]\right\}} #' #' The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. #'dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-30 08:01:57 UTC (rev 2939) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/REDDCOPS.R 2013-08-30 08:39:38 UTC (rev 2940) @@ -5,11 +5,17 @@ #'The Rolling Economic Drawdown Controlled Optimal Portfolio Strategy(REDD-COPS) has #'the portfolio fraction allocated to single risky asset as: #' -#' +#' \deqn{x_t = Max\left\{0,\biggl(\frac{\lambda/\sigma + 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-REDD(t,h)}{1-REDD(t,h)}\biggr]\right\}} +# #' The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. 
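Read as a scalar recipe, the \deqn just above amounts to the following sketch (the inputs are hypothetical: sr stands for the risky asset's Sharpe ratio lambda/sigma, delta for the drawdown limit, gamma for risk aversion, and redd for the current rolling economic drawdown):

# x_t = max{0, ((sr + 1/2)/(1 - delta*gamma)) * (delta - redd)/(1 - redd)}; x_f = 1 - x_t
risky_fraction <- function(sr, delta, gamma, redd) {
  max(0, ((sr + 0.5) / (1 - delta * gamma)) * ((delta - redd) / (1 - redd)))
}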
#' #' For two risky assets in REDD-COPS,dynamic asset allocation weights are : -#' +#'\deqn{\left[\begin{array}{c} x_1 \\ x_2 \end{array}\right] = \frac{1}{1-{\rho}^2} +#' \left[\begin{array}{c} (\lambda_1 + {1/2}\sigma_1 - \rho.(\lambda_2 + {1/2}\sigma_2 +#' )/\sigma_1) \\ (\lambda_1 + {1/2}\sigma_1 - \rho(\lambda_2 + {1/2}\sigma_2)/\sigma_1) +#' \end{array}\right]} + + #' #'The portion of the risk free asset is \eqn{x_f = 1 - x_1 - x_2}. #'dt<-read.zoo("../data/ret.csv",sep=",",header = TRUE) Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd 2013-08-30 08:01:57 UTC (rev 2939) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/EDDCOPS.Rd 2013-08-30 08:39:38 UTC (rev 2940) @@ -25,7 +25,9 @@ \description{ The Economic Drawdown Controlled Optimal Portfolio Strategy(EDD-COPS) has the portfolio fraction allocated - to single risky asset as: + to single risky asset as: \deqn{x_t = + Max\left\{0,\biggl(\frac{\lambda/\sigma + + 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-EDD(t)}{1-EDD(t)}\biggr]\right\}} The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd 2013-08-30 08:01:57 UTC (rev 2939) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/REDDCOPS.Rd 2013-08-30 08:39:38 UTC (rev 2940) @@ -35,11 +35,18 @@ Portfolio Strategy(REDD-COPS) has the portfolio fraction allocated to single risky asset as: + \deqn{x_t = Max\left\{0,\biggl(\frac{\lambda/\sigma + + 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-REDD(t,h)}{1-REDD(t,h)}\biggr]\right\}} The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. For two risky assets in REDD-COPS,dynamic - allocation weights are : + allocation weights are : \deqn{\left[\begin{array}{c} x_1 + \\ x_2 \end{array}\right] = \frac{1}{1-{\rho}^2} + \left[\begin{array}{c} (\lambda_1 + {1/2}\sigma_1 - + \rho.(\lambda_2 + {1/2}\sigma_2 )/\sigma_1) \\ (\lambda_1 + + {1/2}\sigma_1 - \rho(\lambda_2 + {1/2}\sigma_2)/\sigma_1) + \end{array}\right]} The portion of the risk free asset is \eqn{x_f = 1 - x_1 - x_2}. dt<-read.zoo("../data/ret.csv",sep=",",header = From noreply at r-forge.r-project.org Fri Aug 30 10:41:18 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 30 Aug 2013 10:41:18 +0200 (CEST) Subject: [Returnanalytics-commits] r2941 - in pkg/Meucci: .
R demo man Message-ID: <20130830084118.1EEE31806A3@r-forge.r-project.org> Author: xavierv Date: 2013-08-30 10:41:17 +0200 (Fri, 30 Aug 2013) New Revision: 2941 Added: pkg/Meucci/R/MaxRsqTS.R pkg/Meucci/demo/S_TimeSeriesConstrainedIndustries.R pkg/Meucci/man/MaxRsqTS.Rd Modified: pkg/Meucci/DESCRIPTION pkg/Meucci/NAMESPACE Log: - added S_TimeSeriesConstrainedIndustries demo script from chapter 3 and its associated functions Modified: pkg/Meucci/DESCRIPTION =================================================================== --- pkg/Meucci/DESCRIPTION 2013-08-30 08:39:38 UTC (rev 2940) +++ pkg/Meucci/DESCRIPTION 2013-08-30 08:41:17 UTC (rev 2941) @@ -96,7 +96,8 @@ 'FitOrnsteinUhlenbeck.R' 'PlotVolVsCompositionEfficientFrontier.R' 'BlackLittermanFormula.R' + 'Log2Lin.R' + 'PlotCompositionEfficientFrontier.R' ' FitOrnsteinUhlenbeck.R' - 'Log2Lin.R' - 'PlotCompositionEfficientFrontier.R' + 'MaxRsqTS.R' Modified: pkg/Meucci/NAMESPACE =================================================================== --- pkg/Meucci/NAMESPACE 2013-08-30 08:39:38 UTC (rev 2940) +++ pkg/Meucci/NAMESPACE 2013-08-30 08:41:17 UTC (rev 2941) @@ -31,6 +31,7 @@ export(LognormalMoments2Parameters) export(LognormalParam2Statistics) export(MaxRsqCS) +export(MaxRsqTS) export(MleRecursionForStudentT) export(MvnRnd) export(NoisyObservations) Added: pkg/Meucci/R/MaxRsqTS.R =================================================================== --- pkg/Meucci/R/MaxRsqTS.R (rev 0) +++ pkg/Meucci/R/MaxRsqTS.R 2013-08-30 08:41:17 UTC (rev 2941) @@ -0,0 +1,126 @@ +#' Solve for B that maximises sample r-square of F'*B' with X under constraints A*G<=D +#' and Aeq*G=Deq (A,D, Aeq,Deq conformable matrices),as described in A. Meucci, +#' "Risk and Asset Allocation", Springer, 2005. +#' +#' @param X : [matrix] (T x N) +#' @param F : [matrix] (T x K) +#' @param W : [matrix] (N x N) +#' @param A : [matrix] linear inequality constraints +#' @param D : [matrix] +#' @param Aeq : [matrix] linear equality constraints +#' @param Deq : [matrix] +#' @param lb : [vector] lower bound +#' @param ub : [vector] upper bound +#' +#' @return B : [matrix] (N x K) +#' +#' @note +#' Initial code by Tai-Ho Wang +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "MaxRsqTS.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} +#' @export + +MaxRsqTS = function(X, F, W, A = NULL, D = NULL, Aeq = NULL, Deq, lb = NULL, ub = NULL) +{ + + N = dim(X)[ 2 ]; + K = dim(F)[ 2 ]; + X = matrix(as.numeric(X), dim(X)) + + + # compute sample estimates + # E_X = apply( X, 2, mean); + # E_F = apply( F, 2, mean); + XF = cbind( X, F ); + SigmaJoint_XF = (dim(XF)[1]-1)/dim(XF)[1] * cov(XF); + + Sigma_X = SigmaJoint_XF[ 1:N, 1:N ]; + Sigma_XF = SigmaJoint_XF[ 1:N, (N+1):(N+K) ]; + Sigma_F = SigmaJoint_XF[ (N+1):(N+K), (N+1):(N+K) ]; + + + # restructure for feeding to quadprog + Phi = t(W) %*% W; + trSigma_WX = sum( diag( Sigma_X %*% Phi ) ); + + # restructure the linear term of the objective function + FirstDegree = ( -1 / trSigma_WX ) * matrix( t( Phi %*% Sigma_XF ), N*K, ); + + # restructure the quadratic term of the objective function + SecondDegree = Sigma_F; + for( k in 1 : (N - 1) ) + { + SecondDegree = blkdiag( SecondDegree, Sigma_F ); + } + SecondDegree = ( 1 / trSigma_WX ) * t(kron( sqrt( Phi ), diag( 1, K ))) %*% SecondDegree %*% kron( sqrt(Phi), diag( 1, K ) ); + + # restructure the equality constraints + if( !length(Aeq) ) + { + AEq = Aeq; + }else + { + AEq = NULL; + for( k in 1 : N ) + { + AEq = cbind( AEq, kron( diag( 1, K ), Aeq[k] ) 
); + } + } + + Deq = matrix( Deq, , 1); + + # restructure the inequality constraints + if( length(A) ) + { + AA = NULL + for( k in 1 : N ) + { + AA = cbind( AA, kron(diag( 1, K ), A[ k ] ) ); ##ok + } + }else + { + AA = A; + } + + if( length(D)) + { + D = matrix( D, , 1 ); + } + + # restructure upper and lower bounds + if( length(lb) ) + { + lb = matrix( lb, K * N, 1 ); + } + + if( length(ub) ) + { + ub = matrix( ub, K * N, 1 ); + } + + # initial guess + x0 = matrix( 1, K * N, 1 ); + if(length(AA)) + { + AA = ( AA + t(AA) ) / 2; # robustify + + } + + Amat = rbind( AEq, AA); + bvec = c( Deq, D ); + + # solve the constrained generalized r-square problem by quadprog + #options = optimset('LargeScale', 'off', 'MaxIter', 2000, 'Display', 'none'); + + + b = ipop( c = matrix( FirstDegree ), H = SecondDegree, A = Amat, b = bvec, l = lb , u = ub , r = rep(0, length(bvec)) ) + + # reshape for output + G = matrix( attributes(b)$primal, N, ) ; + + return( G ); +} \ No newline at end of file Added: pkg/Meucci/demo/S_TimeSeriesConstrainedIndustries.R =================================================================== --- pkg/Meucci/demo/S_TimeSeriesConstrainedIndustries.R (rev 0) +++ pkg/Meucci/demo/S_TimeSeriesConstrainedIndustries.R 2013-08-30 08:41:17 UTC (rev 2941) @@ -0,0 +1,81 @@ +#' This script fits a time-series linear factor model, computing the industry factor loadings, where the loadings are +#' bounded and constrained to yield unit exposure, as described in A. Meucci, "Risk and Asset Allocation", +#' Springer, 2005, Chapter 3. +#' +#' @references +#' \url{http://symmys.com/node/170} +#' See Meucci's script for "S_TimeSeriesConstrainedIndustries.m" +#' +#' @author Xavier Valls \email{flamejat@@gmail.com} + +################################################################################################################## +### Loads weekly stock returns X and indices stock returns F +load("../data/securitiesTS.Rda"); +Data_Securities = securitiesTS$data[ , -1 ]; # 1st column is date + +load("../data/sectorsTS.Rda"); +Data_Sectors = sectorsTS$data[ , -(1:2) ]; #1st column is date, 2nd column is SPX + +################################################################################################################## +### Estimation +# linear returns for stocks +X = diff( Data_Securities ) / Data_Securities[ -nrow(Data_Securities), ]; +X = X[ ,1:20 ]; # consider first stocks only to lower run time (comment this line otherwise) + +# linear return for the factors +F = diff(Data_Sectors) / Data_Sectors[ -nrow(Data_Sectors) , ]; + +T = dim(X)[1]; +N = dim(X)[2]; +K = dim(F)[ 2 ]; + +################################################################################################################## +# Solve for B that maximises sample r-square of F'*B' with X +# under constraints A*B<=D and Aeq*B=Deq (A,D, Aeq,Deq conformable matrices) +W = diag( 1, N ); +A = NULL; +D = NULL; +Aeq = matrix( 1, K, N ) / N +Deq = matrix( 1, K, 1 ); +lb = 0.8 +ub = 1.2 +B = MaxRsqTS(X, F, W, A, D, Aeq, Deq, lb, ub); + +# compute sample estimates +E_X = matrix( apply(X, 2, mean ) ); +E_F = matrix( apply( F, 2, mean ) ); +XF = cbind( X, F ); +SigmaJoint_XF = (dim(XF)[1]-1)/dim(XF)[1] * cov(XF); + +Sigma_X = SigmaJoint_XF[ 1:N, 1:N ]; +Sigma_XF = SigmaJoint_XF[ 1:N, (N+1):(N+K) ]; +Sigma_F = SigmaJoint_XF[ (N+1):(N+K), (N+1):(N+K) ]; + +# compute intercept a and residual U +a = E_X - B %*% E_F; +U = X - repmat(t(a), T, 1) - F %*% t(B); + +
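In the notation of the estimation block above, the demo is fitting the standard time-series factor model, restated here for readability:

\[ X_t = a + B\,f_t + u_t, \qquad a = \mathrm{E}[X] - B\,\mathrm{E}[F], \qquad U = X - \mathbf{1}_T\,a' - F\,B' \]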
+################################################################################################################## +### Residual analysis + +M = cbind( U, F ); +SigmaJoint_UF = (dim(M)[1]-1)/dim(M)[1] * cov( M ); +CorrJoint_UF = cov2cor(SigmaJoint_UF); + +# correlation between residuals is not null +Corr_U = tril(CorrJoint_UF[ 1:N, 1:N ], -1); +Corr_U = Corr_U[ Corr_U != 0 ]; +mean_Corr_U = mean( abs(Corr_U)); +max_Corr_U = max( abs(Corr_U)); + +dev.new(); +hist(Corr_U, 100); + +# correlation between residuals and factors is not null +Corr_UF = CorrJoint_UF[ 1:N, (N+1):(N+K) ]; +mean_Corr_UF = mean( abs(Corr_UF ) ); +max_Corr_UF = max( abs(Corr_UF ) ); + +dev.new(); +hist(Corr_UF, 100); \ No newline at end of file Added: pkg/Meucci/man/MaxRsqTS.Rd =================================================================== --- pkg/Meucci/man/MaxRsqTS.Rd (rev 0) +++ pkg/Meucci/man/MaxRsqTS.Rd 2013-08-30 08:41:17 UTC (rev 2941) @@ -0,0 +1,48 @@ +\name{MaxRsqTS} +\alias{MaxRsqTS} +\title{Solve for B that maximises sample r-square of F'*B' with X under constraints A*G<=D +and Aeq*G=Deq (A,D, Aeq,Deq conformable matrices),as described in A. Meucci, +"Risk and Asset Allocation", Springer, 2005.} +\usage{ + MaxRsqTS(X, F, W, A = NULL, D = NULL, Aeq = NULL, Deq, + lb = NULL, ub = NULL) +} +\arguments{ + \item{X}{: [matrix] (T x N)} + + \item{F}{: [matrix] (T x K)} + + \item{W}{: [matrix] (N x N)} + + \item{A}{: [matrix] linear inequality constraints} + + \item{D}{: [matrix]} + + \item{Aeq}{: [matrix] linear equality constraints} + + \item{Deq}{: [matrix]} + + \item{lb}{: [vector] lower bound} + + \item{ub}{: [vector] upper bound} +} +\value{ + B : [matrix] (N x K) +} +\description{ + Solve for B that maximises sample r-square of F'*B' with + X under constraints A*G<=D and Aeq*G=Deq (A,D, Aeq,Deq + conformable matrices),as described in A. Meucci, "Risk + and Asset Allocation", Springer, 2005. 
+} +\note{ + Initial code by Tai-Ho Wang +} +\author{ + Xavier Valls \email{flamejat at gmail.com} +} +\references{ + \url{http://symmys.com/node/170} See Meucci's script for + "MaxRsqTS.m" +} + From noreply at r-forge.r-project.org Fri Aug 30 12:29:17 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 30 Aug 2013 12:29:17 +0200 (CEST) Subject: [Returnanalytics-commits] r2942 - in pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm: R man Message-ID: <20130830102917.A55871801E0@r-forge.r-project.org> Author: shubhanm Date: 2013-08-30 12:29:17 +0200 (Fri, 30 Aug 2013) New Revision: 2942 Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.UnsmoothReturn.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.UnsmoothReturn.Rd Log: /R code modification and addition of features Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R 2013-08-30 08:41:17 UTC (rev 2941) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R 2013-08-30 10:29:17 UTC (rev 2942) @@ -20,7 +20,7 @@ #' @rdname AcarSim #' @export AcarSim <- - function() + function(R) { library(PerformanceAnalytics) @@ -30,13 +30,16 @@ R = checkData(edhec, method="xts") # Get dimensions and labels # simulated parameters using edhec data -mu=mean(Return.annualized(edhec)) +mu=mean(Return.annualized(R)) monthly=(1+mu)^(1/12)-1 -sig=StdDev.annualized(edhec[,1])[1]; + vol = as.numeric(StdDev.annualized(R)); + ret=as.numeric(Return.annualized(R)) + drawdown =as.numeric(maxDrawdown(R)) + sig=mean(StdDev.annualized(R)); T= 36 j=1 dt=1/T -nsim=6000; +nsim=1; thres=4; r=matrix(0,nsim,T+1) monthly = 0 @@ -77,9 +80,10 @@ lines(((fddown[,2])/(sig*nsim)),type='o',col="pink") lines(((fddown[,3])/(sig*nsim)),type='o',col="green") lines(((fddown[,4])/(sig*nsim)),type='o',col="red") -legend(32,-4, c("%99", "%95", "%90","%85"), col = c("blue","pink","green","red"), text.col= "black", - lty = c(2, -1, 1), pch = c(-1, 3, 4), merge = TRUE, bg='gray90') - + points((ret/vol), (-drawdown/vol), col = "black", pch=10) + legend(32,-4, c("%99", "%95", "%90","%85","Fund"), col = c("blue","pink","green","red","black"), text.col= "black", + lty = c(2, -1, 1,2), pch = c(-1, 3, 4,10), merge = TRUE, bg='gray90') + title("Maximum Drawdown/Volatility as a function of Return/Volatility 36 monthly returns simulated 6,000 times") } Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R 2013-08-30 08:41:17 UTC (rev 2941) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R 2013-08-30 10:29:17 UTC (rev 2942) @@ -21,7 +21,7 @@ #' @param q number of lag factors for autocorrelation #' @references Okunev, John and White, Derek R., \emph{ Hedge Fund Risk Factors and Value at Risk of Credit Trading Strategies} (October 2003). 
#' Available at SSRN: \url{http://ssrn.com/abstract=460641} -#' @author Peter Carl, Brian Peterson, Shubhankit Mohan +#' @author Shubhankit Mohan #' @seealso \code{\link{Return.Geltner}} \cr #' @keywords ts multivariate distribution models #' @examples @@ -33,8 +33,12 @@ #' @export Return.Okunev<-function(R,q=3) { + column.okunev=R - column.okunev <- column.okunev[!is.na(column.okunev)] + col=ncol(R) + for(j in 1:col){ + column.okunev[,j] <- column.okunev[,j][!is.na(column.okunev[,j])] +} for(i in 1:q) { lagR = lag(column.okunev, k=i) @@ -42,7 +46,8 @@ } return(c(column.okunev)) } -#' Recursive Okunev Call Function +# Recursive Okunev Call Function + quad <- function(R,d) { coeff = as.numeric(acf(as.numeric(edhec[,1]), plot = FALSE)[1:2][[1]]) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.UnsmoothReturn.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.UnsmoothReturn.R 2013-08-30 08:41:17 UTC (rev 2941) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.UnsmoothReturn.R 2013-08-30 10:29:17 UTC (rev 2942) @@ -31,7 +31,7 @@ #' @rdname table.UnsmoothReturn #' @export table.UnsmoothReturn <- - function (R, n = 3, p= 0.95, digits = 4) + function (R, n = 2, p= 0.95, digits = 4) {# @author # DESCRIPTION: @@ -53,17 +53,17 @@ # for each column, do the following: for(column in 1:columns) { x = y[,column] - - z = c(arma(x,0,2)$theta[1], - arma(x,0,2)$se.theta[1], - arma(x,0,2)$theta[2], - arma(x,0,2)$se.theta[2], - arma(x,0,2)$se.theta[2]) + ma.stats= arma(x, order = c(0, 2)) + + z = c(as.numeric(ma.stats$coef[1]), + sqrt(as.numeric(ma.stats$vcov[1]))*100, + as.numeric(ma.stats$coef[2]), + sqrt(as.numeric(ma.stats$vcov[4]))*100,sum(as.numeric(ma.stats$coef[1:2])*as.numeric(ma.stats$coef[1:2]))) znames = c( - "MA(1)", - "Std Error of MA(1)(in %)", - "MA(2)", - "Std Error of MA(2)(in %)", "Smoothing Index" ) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd 2013-08-30 08:41:17 UTC (rev 2941) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd 2013-08-30 10:29:17 UTC (rev 2942) @@ -2,7 +2,7 @@ \alias{AcarSim} \title{Acar-Shane Maximum Loss Plot} \usage{ - AcarSim() + AcarSim(R) } \description{ To get some insight on the relationships between maximum Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd 2013-08-30 08:41:17 UTC (rev 2941) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd 2013-08-30 10:29:17 UTC (rev 2942) @@ -59,7 +59,7 @@ head(Return.Okunev(managers[,1:3]),n=3) } \author{ - Peter Carl, Brian Peterson, Shubhankit Mohan + Shubhankit Mohan } \references{ Okunev, John and White, Derek R., \emph{ Hedge Fund Risk Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.UnsmoothReturn.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.UnsmoothReturn.Rd 2013-08-30 08:41:17 UTC (rev 2941) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.UnsmoothReturn.Rd 2013-08-30
10:29:17 UTC (rev 2942) @@ -2,7 +2,7 @@ \alias{table.UnsmoothReturn} \title{Table of Unsmooth Returns} \usage{ - table.UnsmoothReturn(R, n = 3, p = 0.95, digits = 4) + table.UnsmoothReturn(R, n = 2, p = 0.95, digits = 4) } \arguments{ \item{R}{an xts, vector, matrix, data frame, timeSeries From noreply at r-forge.r-project.org Fri Aug 30 16:04:36 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 30 Aug 2013 16:04:36 +0200 (CEST) Subject: [Returnanalytics-commits] r2943 - in pkg/PortfolioAnalytics: R man Message-ID: <20130830140436.655EE18491A@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-30 16:04:36 +0200 (Fri, 30 Aug 2013) New Revision: 2943 Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd Log: Adding options to chart.EfficientFrontierOverlay. Modified: pkg/PortfolioAnalytics/R/charts.efficient.frontier.R =================================================================== --- pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-30 10:29:17 UTC (rev 2942) +++ pkg/PortfolioAnalytics/R/charts.efficient.frontier.R 2013-08-30 14:04:36 UTC (rev 2943) @@ -504,9 +504,13 @@ #' @param chart.assets TRUE/FALSE to include the assets #' @param labels.assets TRUE/FALSE to include the asset names in the plot #' @param pch.assets plotting character of the assets, same as in \code{\link{plot}} +#' @param cex.assets A numerical value giving the amount by which the asset points and labels should be magnified relative to the default. +#' @param col vector of colors with length equal to the number of portfolios in \code{portfolio_list} +#' @param lty vector of line types with length equal to the number of portfolios in \code{portfolio_list} +#' @param lwd vector of line widths with length equal to the number of portfolios in \code{portfolio_list} #' @author Ross Bennett #' @export -chart.EfficientFrontierOverlay <- function(R, portfolio_list, type, n.portfolios=25, match.col="ES", search_size=2000, main="Efficient Frontiers", cex.axis=0.8, element.color="darkgray", legend.loc=NULL, legend.labels=NULL, cex.legend=0.8, xlim=NULL, ylim=NULL, ..., chart.assets=TRUE, labels.assets=TRUE, pch.assets=21){ +chart.EfficientFrontierOverlay <- function(R, portfolio_list, type, n.portfolios=25, match.col="ES", search_size=2000, main="Efficient Frontiers", cex.axis=0.8, element.color="darkgray", legend.loc=NULL, legend.labels=NULL, cex.legend=0.8, xlim=NULL, ylim=NULL, ..., chart.assets=TRUE, labels.assets=TRUE, pch.assets=21, cex.assets=0.8, col=NULL, lty=NULL, lwd=NULL){ # create multiple efficient frontier objects (one per portfolio in portfolio_list) if(!is.list(portfolio_list)) stop("portfolio_list must be passed in as a list") if(length(portfolio_list) == 1) warning("Only one portfolio object in portfolio_list") @@ -547,6 +551,11 @@ if(labels.assets) text(x=asset_risk, y=asset_ret, labels=rnames, pos=4, cex=cex.assets) } + # set some basic plot parameters + if(is.null(col)) col <- 1:length(out) + if(is.null(lty)) lty <- 1:length(out) + if(is.null(lwd)) lwd <- rep(1, length(out)) + for(i in 1:length(out)){ tmp <- out[[i]] tmpfrontier <- tmp$frontier @@ -566,13 +575,13 @@ } if(is.na(mtc)) stop("could not match match.col with column name of extractStats output") # Add the efficient frontier lines to the plot - lines(x=tmpfrontier[, mtc], y=tmpfrontier[, mean.mtc], col=i, lty=i, lwd=2) + lines(x=tmpfrontier[, mtc], y=tmpfrontier[, mean.mtc], col=col[i], lty=lty[i], lwd=lwd[i]) } 
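# A usage sketch of the new per-frontier styling vectors (the portfolio objects and
# labels here are hypothetical; col, lty and lwd must each have one entry per portfolio):
# chart.EfficientFrontierOverlay(R, portfolio_list = list(port.lo, port.box),
#                                type = "mean-StdDev", match.col = "StdDev",
#                                col = c("darkgreen", "darkred"), lty = c(1, 2), lwd = c(2, 2),
#                                legend.loc = "topleft", legend.labels = c("long only", "box"))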
if(!is.null(legend.loc)){ if(is.null(legend.labels)){ legend.labels <- paste("Portfolio", 1:length(out), sep=".") } - legend(legend.loc, legend=legend.labels, col=1:length(out), lty=1:length(out), lwd=2, cex=cex.legend, bty="n") + legend(legend.loc, legend=legend.labels, col=col, lty=lty, lwd=lwd, cex=cex.legend, bty="n") } return(invisible(out)) } Modified: pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd =================================================================== --- pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd 2013-08-30 10:29:17 UTC (rev 2942) +++ pkg/PortfolioAnalytics/man/chart.EfficientFrontierOverlay.Rd 2013-08-30 14:04:36 UTC (rev 2943) @@ -9,7 +9,8 @@ legend.loc = NULL, legend.labels = NULL, cex.legend = 0.8, xlim = NULL, ylim = NULL, ..., chart.assets = TRUE, labels.assets = TRUE, - pch.assets = 21) + pch.assets = 21, cex.assets = 0.8, col = NULL, + lty = NULL, lwd = NULL) } \arguments{ \item{R}{an xts object of asset returns} @@ -67,6 +68,19 @@ \item{pch.assets}{plotting character of the assets, same as in \code{\link{plot}}} + + \item{cex.assets}{A numerical value giving the amount by + which the asset points and labels should be magnified + relative to the default.} + + \item{col}{vector of colors with length equal to the + number of portfolios in \code{portfolio_list}} + + \item{lty}{vector of line types with length equal to the + number of portfolios in \code{portfolio_list}} + + \item{lwd}{vector of line widths with length equal to the + number of portfolios in \code{portfolio_list}} } \description{ Overlay the efficient frontiers of multiple portfolio From noreply at r-forge.r-project.org Fri Aug 30 17:28:20 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 30 Aug 2013 17:28:20 +0200 (CEST) Subject: [Returnanalytics-commits] r2944 - in pkg/PerformanceAnalytics/sandbox/pulkit: . 
R vignettes Message-ID: <20130830152820.889A2185C34@r-forge.r-project.org> Author: pulkit Date: 2013-08-30 17:28:20 +0200 (Fri, 30 Aug 2013) New Revision: 2944 Removed: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.pdf pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R pkg/PerformanceAnalytics/sandbox/pulkit/R/TriplePenance.R pkg/PerformanceAnalytics/sandbox/pulkit/R/na.skip.R Log: check changes
Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-30 14:04:36 UTC (rev 2943) +++ pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-30 15:28:20 UTC (rev 2944) @@ -7,20 +7,14 @@ export(chart.Penance) export(chart.REDD) export(chart.SRIndifference) -export(dd_norm) -export(diff_Q) export(DrawdownGPD) export(EconomicDrawdown) export(EDDCOPS) -export(get_minq) -export(getQ) -export(get_TuW) export(golden_section) export(MaxDD) export(MinTrackRecord) export(MonteSimulTriplePenance) export(MultiBetaDrawdown) -export(na.skip) export(ProbSharpeRatio) export(PsrPortfolio) export(REDDCOPS) @@ -29,5 +23,4 @@ export(table.Penance) export(table.PSR) export(TuW) -export(tuw_norm) useDynLib(noniid.pm)
Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-30 14:04:36 UTC (rev 2943) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/BenchmarkPlots.R 2013-08-30 15:28:20 UTC (rev 2944) @@ -2,7 +2,8 @@ #' #'@description #'Benchmark Sharpe Ratio Plots are used to give the relationship between the -#'Benchmark Sharpe Ratio and average correlation, average Sharpe ratio or the number of #'strategies keeping other parameters constant. Here average Sharpe ratio and average #'correlation stand for the average of all the strategies in the portfolio. The original +#'Benchmark Sharpe Ratio and average correlation, average Sharpe ratio or the number of #'strategies keeping other parameters constant. +#'Here average Sharpe ratio and average correlation stand for the average of all the strategies in the portfolio. The original #'point of the return series is also shown on the plots. #' #'The equation for the Benchmark Sharpe Ratio is:
Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/TriplePenance.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/TriplePenance.R 2013-08-30 14:04:36 UTC (rev 2943) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/TriplePenance.R 2013-08-30 15:28:20 UTC (rev 2944) @@ -13,7 +13,7 @@ ## REFERENCE: ## Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs ## and the 'Triple Penance' Rule (January 1, 2013).
-#'@export + dd_norm<-function(x,confidence){ # DESCRIPTION: # A function to return the maximum drawdown for a normal distribution @@ -30,7 +30,6 @@ return(c(dd*100,t)) } -#'@export tuw_norm<-function(x,confidence){ # DESCRIPTION: # A function to return the Time under water @@ -47,7 +46,6 @@ -#'@export get_minq<-function(R,confidence){ # DESCRIPTION: @@ -75,7 +73,6 @@ return(c(-minQ$value*100,minQ$x)) } -#'@export getQ<-function(bets,phi,mu,sigma,dp0,confidence){ # DESCRIPTION: @@ -103,7 +100,6 @@ } -#'@export get_TuW<-function(R,confidence){ # DESCRIPTION: @@ -134,7 +130,6 @@ -#'@export diff_Q<-function(bets,phi,mu,sigma,dp0,confidence){ # DESCRIPTION: Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/na.skip.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/na.skip.R 2013-08-30 14:04:36 UTC (rev 2943) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/na.skip.R 2013-08-30 15:28:20 UTC (rev 2944) @@ -1,4 +1,3 @@ -#'@export na.skip <- function (x, FUN=NULL, ...) # maybe add a trim capability? { # @author Brian Peterson Deleted: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw 2013-08-30 14:04:36 UTC (rev 2943) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.Rnw 2013-08-30 15:28:20 UTC (rev 2944) @@ -1,101 +0,0 @@ -\documentclass[12pt,letterpaper,english]{article} -\usepackage{times} -\usepackage[T1]{fontenc} -\IfFileExists{url.sty}{\usepackage{url}} - {\newcommand{\url}{\texttt}} - -\usepackage[utf8]{inputenc} -\usepackage{babel} -\usepackage{Rd} - -\usepackage{Sweave} -\SweaveOpts{engine=R,eps = FALSE} -%\VignetteIndexEntry{Probabilistic Sharpe Ratio} -%\VignetteDepends{PerformanceAnalytics} -%\VignetteKeywords{Probabilistic Sharpe Ratio,Minimum Track Record Length,risk,benchmark,portfolio} -%\VignettePackage{PerformanceAnalytics} - -\begin{document} -\SweaveOpts{concordance=TRUE} - -\title{ Probabilistic Sharpe Ratio Optimization } - -% \keywords{Probabilistic Sharpe Ratio,Minimum Track Record Length,risk,benchmark,portfolio} - -\makeatletter -\makeatother -\maketitle - -\begin{abstract} - - This vignette gives an overview of the Probabilistic Sharpe Ratio , Minimum Track Record Length and the Probabilistic Sharpe Ratio Optimization technique used to find the optimal portfolio that maximizes the Probabilistic Sharpe Ratio. It gives an overview of the usability of the functions and its application. - -A probabilistic translation of Sharpe ratio, called PSR, is proposed to account for estimation errors in an IID non-Normal framework.When assessing Sharpe ratio?s ability to evaluate skill,we find that a longer track record may be able to compensate for certain statistical shortcomings of the returns probability distribution. Stated differently, despite Sharpe ratio's well-documented deficiencies, it can still provide evidence of investment skill, as long as the user learns to require the proper track record length. - -The portfolio of hedge fund indices that maximizes Sharpe ratio can be very different from -the portfolio that delivers the highest PSR. Maximizing for PSR leads to better diversified and -more balanced hedge fund allocations compared to the concentrated outcomes of Sharpe ratio -maximization. 
- - - -\end{abstract} - -<>= -library(PerformanceAnalytics) -data(edhec) -library(noniid.pm) -@ - - -\section{Probabilistic Sharpe Ratio} - Given a predefined benchmark Sharpe ratio $SR^\ast$, the observed Sharpe ratio $\hat{SR}$ can be expressed in probabilistic terms as - - \deqn{\hat{PSR}(SR^\ast) = Z\biggl[\frac{(\hat{SR}-SR^\ast)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\ast + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\biggr]} - - Here $n$ is the track record length or the number of data points. It can be daily, weekly or yearly depending on the input given. - - \eqn{\hat{\gamma{_3}}} and \eqn{\hat{\gamma{_4}}} are the skewness and kurtosis respectively. It is not unusual to find strategies with irregular trading frequencies, such as weekly strategies that may not trade for a month. This poses a problem when computing an annualized Sharpe ratio, and there is no consensus as to how skill should be measured in the context of irregular bets. Because PSR measures skill in probabilistic terms, it is invariant to calendar conventions. All calculations are done in the original frequency -of the data, and there is no annualization. The Reference Sharpe Ratio is also given in the non-annualized form and should be smaller than the Observed Sharpe Ratio. - -<<>>= -data(edhec) -ProbSharpeRatio(edhec[,1],refSR = 0.23) -@ - -\section{Minimum Track Record Length} - -If a track record is shorter than the Minimum Track Record Length (MinTRL), we do -not have enough confidence that the observed \eqn{\hat{SR}} is above the designated threshold -\eqn{SR^\ast}. The Minimum Track Record Length is given by the following expression. - -\deqn{MinTRL = n^\ast = 1+\biggl[1-\hat{\gamma_3}\hat{SR}+\frac{\hat{\gamma_4}}{4}\hat{SR^2}\biggr]\biggl(\frac{Z_\alpha}{\hat{SR}-SR^\ast}\biggr)^2} - -\eqn{\gamma{_3}} and \eqn{\gamma{_4}} are the skewness and kurtosis respectively. It is important to note that MinTRL is expressed in terms of the number of observations, not annual or calendar terms. All the values used in the above formula are non-annualized, in the same frequency as that of the returns. - -<<>>= -data(edhec) -MinTrackRecord(edhec[,1],refSR = 0.23) -@ - -\section{Probabilistic Sharpe Ratio Optimal Portfolio} - -We would like to find the vector of weights that maximizes the expression - - \deqn{\hat{PSR}(SR^\ast) = Z\biggl[\frac{(\hat{SR}-SR^\ast)\sqrt{n-1}}{\sqrt{1-\hat{\gamma_3}SR^\ast + \frac{\hat{\gamma_4}-1}{4}\hat{SR^2}}}\biggr]} - -where \eqn{\sigma = \sqrt{E[(r-\mu)^2]}} is its standard deviation, \eqn{\gamma_3=\frac{E\biggl[(r-\mu)^3\biggr]}{\sigma^3}} its skewness, \eqn{\gamma_4=\frac{E\biggl[(r-\mu)^4\biggr]}{\sigma^4}} its kurtosis and \eqn{SR = \frac{\mu}{\sigma}} its Sharpe Ratio. - -Because \eqn{\hat{PSR}(SR^\ast)=Z[\hat{Z^\ast}]} is a monotonic increasing function of -\eqn{\hat{Z^\ast}}, it suffices to compute the vector that maximizes \eqn{\hat{Z^\ast}}. - This optimal vector is invariant of the value adopted by the parameter \eqn{SR^\ast}.
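As a quick sanity check of the formula above, PSR can be computed directly from a numeric return vector with plain moment estimators. A minimal sketch (the helper name psr_sketch is ours; ProbSharpeRatio() is the package implementation, and we use the estimated Sharpe ratio in the denominator, following Bailey and Lopez de Prado):

psr_sketch <- function(x, refSR) {
  n  <- length(x)
  sr <- mean(x) / sd(x)                    # non-annualized Sharpe ratio
  g3 <- mean((x - mean(x))^3) / sd(x)^3    # skewness
  g4 <- mean((x - mean(x))^4) / sd(x)^4    # kurtosis
  z  <- (sr - refSR) * sqrt(n - 1) /
    sqrt(1 - g3 * sr + (g4 - 1) / 4 * sr^2)
  pnorm(z)                                 # probability that SR exceeds refSR
}
psr_sketch(as.numeric(edhec[, 1]), refSR = 0.23)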
- - -<<>>= -data(edhec) -PsrPortfolio(edhec) -@ - -\end{document} -
Deleted: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/ProbSharpe.pdf =================================================================== (Binary files differ)
Deleted: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw 2013-08-30 14:04:36 UTC (rev 2943) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/REDDCOPS.Rnw 2013-08-30 15:28:20 UTC (rev 2944) @@ -1,144 +0,0 @@ -\documentclass[12pt,letterpaper,english]{article} -\usepackage{times} -\usepackage[T1]{fontenc} -\IfFileExists{url.sty}{\usepackage{url}} - {\newcommand{\url}{\texttt}} - -\usepackage{babel} -\usepackage{Rd} - -\usepackage{Sweave} -\SweaveOpts{engine=R,eps = FALSE} -%\VignetteIndexEntry{Rolling Economic Drawdown} -%\VignetteDepends{PerformanceAnalytics} -%\VignetteKeywords{Drawdown,risk,portfolio} -%\VignettePackage{PerformanceAnalytics} - -\begin{document} -\SweaveOpts{concordance=TRUE} - -\title{ Rolling Economic Drawdown Controlled Optimal Strategy } - -% \keywords{Drawdown,risk,portfolio} - -\makeatletter -\makeatother -\maketitle - -\begin{abstract} - -Drawdown based stopouts is a framework for informing the decision of stopping a portfolio manager or investment strategy once it has reached the drawdown or time under water limit associated with a certain confidence limit. - -\end{abstract} - -<>= -library(PerformanceAnalytics) -data(edhec) -library(noniid.pm) -@ - -\section{ Rolling Economic Max } -Rolling Economic Max at time t, looking back at the portfolio wealth history -for a rolling window of length H, is given by: - -\deqn{REM(t,h)=\max_{t-H \leq s \leq t}\left[(1+r_f)^{t-s}W_s\right]} - -Here \eqn{r_f} is the average realized risk free rate over a period of length t-s. If the risk free rate is changing, the compounding factor is - -\deqn{ \prod_{i=s}^{t}(1+r_{i}{\triangle}t)} - - -here \eqn{r_i} denotes the risk free interest rate during the \eqn{i^{th}} discrete -time interval \eqn{{\triangle}t}. - - -\subsection{Usage of the function} - -The Return Series, risk free rate of return, lookback period and the type of cumulative return are taken as the input. The Return Series can be an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns. - -<<>>= -data(edhec) -head(rollEconomicMax(edhec,0.08,100)) -@ - - - -\section{ Rolling Economic Drawdown } - -To calculate the rolling economic drawdown, the cumulative -return and the rolling economic max are calculated for each point. The Return series, risk -free return (rf) and the lookback period (h) are taken as the input. -Rolling Economic Drawdown is given by the equation - -\deqn{REDD(t,h)=1-\frac{W_t}{REM(t,H)}} - -Here REM stands for Rolling Economic Max. - -\subsection{Usage} - -The Return Series, risk free return and the type of cumulative return are taken as the input. The Return Series can be an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns. - - -<<>>= -data(edhec) -head(rollDrawdown(edhec,0.08,100)) -@ - - -\section{ Rolling Economic Drawdown Controlled Optimal Strategy } - -The Rolling Economic Drawdown Controlled Optimal Portfolio Strategy (REDD-COPS) has -the portfolio fraction allocated to a single risky asset as: - - -The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}.
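For concreteness, REDD(t,h), the quantity that drives the REDD-COPS allocation rule, can be computed directly from a wealth index. A minimal sketch (assuming w is a numeric wealth series, rf a constant per-period risk free rate and h the window length; rollDrawdown() is the package routine):

redd_sketch <- function(w, rf, h) {
  redd <- rep(NA_real_, length(w))
  for (t in seq(h, length(w))) {
    s <- (t - h + 1):t                   # window t-H <= s <= t
    rem <- max((1 + rf)^(t - s) * w[s])  # rolling economic max REM(t,h)
    redd[t] <- 1 - w[t] / rem            # REDD(t,h) = 1 - W_t/REM(t,h)
  }
  redd
}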
- -For two risky assets in REDD-COPS, dynamic asset allocation weights are: - -The portion of the risk free asset is \eqn{x_f = 1 - x_1 - x_2}. - -\subsection{Usage} - -The Return series, drawdown limit, risk free rate, the lookback period, the number of assets and the type of REDD-COPS are taken as the input. - - -<<>>= -data(edhec) -head(REDDCOPS(edhec,delta = 0.1,Rf = 0,h = 40)) -@ - -\section{ Economic Drawdown } - -To calculate the economic drawdown, the cumulative -return and the economic max are calculated for each point. The Return series, risk -free return (rf) and the lookback period (h) are taken as the input. -Economic Drawdown is given by the equation - -\deqn{EDD(t)=1-\frac{W_t}{EM(t)}} - -Here EM stands for Economic Max. - -\subsection{ Usage} - -The Return Series, risk free return and the type of cumulative return are taken as the input. The Return Series can be an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns. - -<<>>= -data(edhec) -head(EDDCOPS(edhec,delta = 0.1,gamma = 0.7,Rf = 0)) -@ - -\section{ Economic Drawdown Controlled Optimal Strategy } -The Economic Drawdown Controlled Optimal Portfolio Strategy (EDD-COPS) has -the portfolio fraction allocated to a single risky asset as: - -\deqn{x_t = Max\left\{0,\biggl(\frac{\lambda/\sigma + 1/2}{1-\delta.\gamma}\biggr).\biggl[\frac{\delta-EDD(t)}{1-EDD(t)}\biggr]\right\}} - -The risk free asset accounts for the rest of the portfolio allocation \eqn{x_f = 1 - x_t}. - -\subsection{Usage} -<<>>= -data(edhec) -head(EDDCOPS(edhec,delta = 0.1,gamma = 0.7,Rf = 0)) -@ - -\end{document}
Deleted: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw 2013-08-30 14:04:36 UTC (rev 2943) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/SharepRatioEfficientFrontier.Rnw 2013-08-30 15:28:20 UTC (rev 2944) @@ -1,76 +0,0 @@ -\documentclass[12pt,letterpaper,english]{article} -\usepackage{times} -\usepackage[T1]{fontenc} -\IfFileExists{url.sty}{\usepackage{url}} - {\newcommand{\url}{\texttt}} - -\usepackage[utf8]{inputenc} -\usepackage{babel} -\usepackage{Rd} - -\usepackage{Sweave} -\SweaveOpts{engine=R,eps = FALSE} -%\VignetteIndexEntry{Sharpe Ratio Indifference Curve} -%\VignetteDepends{PerformanceAnalytics} -%\VignetteKeywords{Benchmark Sharpe Ratio,Sharpe Ratio Indifference Curve,Benchmark Sharpe Ratio Plots} -%\VignettePackage{PerformanceAnalytics} - -\begin{document} -\SweaveOpts{concordance=TRUE} - -\title{Sharpe Ratio Indifference Curve} -% \keywords{Sharpe Ratio Indifference Curve,Benchmark Sharpe Ratio,risk,benchmark,portfolio} - -\makeatletter -\makeatother -\maketitle - -\begin{abstract} - - This vignette gives an overview of the Benchmark Sharpe Ratio, Sharpe Ratio Indifference Curve and various plots associated with a Benchmark Sharpe Ratio. It gives an overview of the usability of the functions and their application. - - \end{abstract} - -<>= -library(PerformanceAnalytics) -data(edhec) -library(noniid.pm) -@ - - - \section{Benchmark Sharpe Ratio} - - The performance of an Equal Volatility Weights benchmark (\eqn{SR_B}) is fully characterized in terms of: - -1. Number of approved strategies (S). -2. Average SR among strategies (SR). -3. Average off-diagonal correlations among strategies, \eqn{\bar{\rho}} (see the sketch just below).
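These three quantities pin down the benchmark Sharpe ratio through the formula given just below. A minimal standalone sketch (argument names are ours; BenchmarkSR() is the package implementation):

benchmark_sr_sketch <- function(sr_bar, rho_bar, S) {
  sr_bar * sqrt(S / (1 + (S - 1) * rho_bar))   # SR_B
}
benchmark_sr_sketch(0.2, 0.3, 10)   # adding strategies helps less as rho_bar grows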
- -The benchmark SR is a linear function of the average SR of the individual strategies, and a decreasing convex function of the number of strategies and the average pairwise correlation. - -The benchmark Sharpe Ratio is given by the following equation. - -\deqn{SR_B = \bar{SR}\sqrt{\frac{S}{1+(S-1)\bar{\rho}}}} - -<<>>= -BenchmarkSR(edhec) -@ - -\section{Sharpe Ratio Indifference Curve} - -The trade-off between a candidate's SR and its correlation -to the existing set of strategies is given by the Sharpe -ratio indifference curve. It is a plot between the candidate's -Sharpe Ratio and the candidate's average correlation for a given -portfolio Sharpe Ratio. - -The equation for the candidate's average correlation for a given -Sharpe Ratio is given by - -\deqn{\bar{\rho}_{S+1}=\frac{1}{2}\biggl[\frac{(\bar{SR}.S+SR_{S+1})^2}{S.SR_B^2}-\frac{S+1}{S}-\bar{\rho}(S-1)\biggr]} - -<>= -chart.SRIndifference(edhec) -@ - -\end{document} \ No newline at end of file
Deleted: pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw 2013-08-30 14:04:36 UTC (rev 2943) +++ pkg/PerformanceAnalytics/sandbox/pulkit/vignettes/TriplePenance.Rnw 2013-08-30 15:28:20 UTC (rev 2944) @@ -1,105 +0,0 @@ -\documentclass[12pt,letterpaper,english]{article} -\usepackage{times} -\usepackage[T1]{fontenc} -\IfFileExists{url.sty}{\usepackage{url}} - {\newcommand{\url}{\texttt}} - -\usepackage{babel} -\usepackage{Rd} - -\usepackage{Sweave} -\SweaveOpts{engine=R,eps = FALSE} -%\VignetteIndexEntry{Triple Penance Rule} -%\VignetteDepends{PerformanceAnalytics} -%\VignetteKeywords{Triple Penance Rule,Maximum Drawdown,Time under water,risk,portfolio} -%\VignettePackage{PerformanceAnalytics} - -\begin{document} -\SweaveOpts{concordance=TRUE} - -\title{ Triple Penance Rule } - -% \keywords{Triple Penance Rule,Maximum Drawdown,Time Under Water,risk,portfolio} - -\makeatletter -\makeatother -\maketitle - -\begin{abstract} - -Drawdown based stopouts is a framework for informing the decision of stopping a portfolio manager or investment strategy once it has reached the drawdown or time under water limit associated with a certain confidence limit. - -\end{abstract} - -<>= -library(PerformanceAnalytics) -data(edhec) -library(noniid.pm) -@ -\section{ Maximum Drawdown } -Maximum Drawdown tells us how much a particular strategy could lose with a given confidence level. This function calculates Maximum Drawdown for two underlying processes: normal and autoregressive. - -When the distribution is normal, the Maximum Drawdown is given by the formula - -\deqn{MaxDD_{\alpha}=max\left\{0,\frac{(z_{\alpha}\sigma)^2}{4\mu}\right\}} - -The time at which the Maximum Drawdown occurs is given by - - -\deqn{t^\ast=\biggl(\frac{Z_{\alpha}\sigma}{2\mu}\biggr)^2} - -Here $Z_{\alpha}$ is the critical value of the Standard Normal Distribution associated with a probability $\alpha$. $\sigma$ and $\mu$ are the standard deviation and the mean respectively. - -When the distribution is non-normal and time-dependent, an autoregressive process is assumed and the quantile function is
- - -\deqn{Q_{\alpha,t}=\frac{\phi^{(t+1)}-\phi}{\phi-1}(\triangle\pi_0-\mu)+{\mu}t+Z_{\alpha}\frac{\sigma}{|\phi-1|}\biggl(\frac{\phi^{2(t+1)}-1}{\phi^2-1}-2\frac{\phi^{(t+1)}-1}{\phi-1}+t+1\biggr)^{1/2}} - -$\phi$ is estimated as - -\deqn{\hat{\phi} = Cov_0[\triangle\pi_\tau,\triangle\pi_{\tau-1}](Cov_0[\triangle\pi_{\tau-1},\triangle\pi_{\tau-1}])^{-1}} - - -and the Maximum Drawdown is given by - -\deqn{MaxDD_{\alpha}=max\left\{0,-MinQ_\alpha\right\}} - -The Golden Section Algorithm is used to calculate the minimum of the function Q. - - -\subsection{Usage of the function} - -The Return Series, confidence level and the type of distribution are taken as the input. The Return Series can be an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns. -<<>>= -data(edhec) -MaxDD(edhec,0.95,type="ar") -@ - - -The $t^\ast$ in the output is the time at which the Maximum Drawdown occurs. - -\section{ Maximum Time Under Water } - -For a particular sequence $\left\{\pi_t\right\}$, the time under water $(TuW)$ is the minimum number of observations, $t>0$, such that $\pi_{t-1}<0$ and $\pi_t>0$. - -For a normal distribution the Maximum Time Under Water is given by the following expression. - -\deqn{MaxTuW_\alpha=\biggl(\frac{Z_\alpha{\sigma}}{\mu}\biggr)^2} - -For an autoregressive process the time under water is found using the golden section algorithm. - -\subsection{Usage} -<<>>= -data(edhec) -TuW(edhec,0.95,type="ar") -@ - -The Return Series, confidence level and the type of distribution are taken as the input. The Return Series can be an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns. - -The output is given in the same periodicity as the input series. - -\section{ Golden Section Algorithm } - - -\end{document} - From noreply at r-forge.r-project.org Fri Aug 30 20:09:31 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Fri, 30 Aug 2013 20:09:31 +0200 (CEST) Subject: [Returnanalytics-commits] r2945 - in pkg/PerformanceAnalytics/sandbox/pulkit: .
R man src week7 Message-ID: <20130830180931.C94E618515F@r-forge.r-project.org> Author: pulkit Date: 2013-08-30 20:09:31 +0200 (Fri, 30 Aug 2013) New Revision: 2945 Added: pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R pkg/PerformanceAnalytics/sandbox/pulkit/week7/gpdmle.R Removed: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R pkg/PerformanceAnalytics/sandbox/pulkit/R/gpdmle.R Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd pkg/PerformanceAnalytics/sandbox/pulkit/src/moment.c Log: check changes Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-30 15:28:20 UTC (rev 2944) +++ pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-30 18:09:31 UTC (rev 2945) @@ -30,8 +30,6 @@ 'DrawdownBeta.R' 'EDDCOPS.R' 'Edd.R' - 'ExtremeDrawdown.R' - 'gpdmle.R' 'GoldenSection.R' 'MaxDD.R' 'MinTRL.R' @@ -47,3 +45,6 @@ 'TriplePenance.R' 'TuW.R' 'na.skip.R' + 'capm_aorda.R' + 'psr_python.R' + 'ret.R' Modified: pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-30 15:28:20 UTC (rev 2944) +++ pkg/PerformanceAnalytics/sandbox/pulkit/NAMESPACE 2013-08-30 18:09:31 UTC (rev 2945) @@ -7,7 +7,6 @@ export(chart.Penance) export(chart.REDD) export(chart.SRIndifference) -export(DrawdownGPD) export(EconomicDrawdown) export(EDDCOPS) export(golden_section) Deleted: pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-30 15:28:20 UTC (rev 2944) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R 2013-08-30 18:09:31 UTC (rev 2945) @@ -1,92 +0,0 @@ -#'@title -#'Modelling Drawdown using Extreme Value Theory -#' -#"@description -#'It has been shown empirically that Drawdowns can be modelled using Modified Generalized Pareto -#'distribution(MGPD), Generalized Pareto Distribution(GPD) and other particular cases of MGPD such -#'as weibull distribution \eqn{MGPD(\gamma,0,\psi)} and unit exponential distribution\eqn{MGPD(1,0,\psi)} -#' -#' Modified Generalized Pareto Distribution is given by the following formula -#' -#' \deqn{ -#' G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m^\gamma}{\psi})^(-1/\eta), if \eta \neq 0 \\ 1- e^{-frac{m^\gamma}{\psi}}, if \eta = 0,\end{array}} -#' -#' Here \eqn{\gamma{\epsilon}R} is the modifying parameter. When \eqn{\gamma<1} the corresponding densities are -#' strictly decreasing with heavier tail; the GDP is recovered by setting \eqn{\gamma = 1} .\eqn{\gamma \textgreater 1} -#' -#' The GDP is given by the following equation. \eqn{MGPD(1,\eta,\psi)} -#' -#'\deqn{G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m}{\psi})^(-1/\eta), if \eta \neq 0 \\ 1- e^{-frac{m}{\psi}}, if \eta = 0,\end{array}} -#' -#' The weibull distribution is given by the following equation \eqn{MGPD(\gamma,0,\psi)} -#' -#'\deqn{G(m) = 1- e^{-frac{m^\gamma}{\psi}}} -#' -#'In this function weibull and generalized Pareto distribution has been covered. This function can be -#'expanded in the future to include more Extreme Value distributions as the literature on such distribution -#'matures in the future. 
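A minimal sketch of the peaks-over-threshold step this function performs (dd is assumed to hold positive drawdown magnitudes; gpd() is the fitting routine bundled from the POT package in gpdmle.R; the 0.90 mirrors the function's default threshold argument):

dd <- sort(dd)
u  <- dd[ceiling(0.90 * length(dd))]  # empirical 90th-percentile threshold
fit <- gpd(dd, u)                     # GPD fit to the exceedances
fit$param                             # scale and shape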
-#' -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset return -#' @param type The type of distribution "gpd","pd","weibull" -#' @param threshold The threshold beyond which the drawdowns have to be modelled -#' -#' -#'@references -#'Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003). -#'Coppead Working Paper Series No. 359.Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322. -#' -#'@export -DrawdownGPD<-function(R,type=c("gpd","weibull"),threshold=0.90){ - x = checkData(R) - columns = ncol(R) - columnnames = colnames(R) - type = type[1] - dr = -Drawdowns(R) - - - gpdfit<-function(data,threshold){ - if(type=="gpd"){ - gpd_fit = gpd(data,threshold) - result = list(shape = gpd_fit$param[2],scale = gpd_fit$param[1]) - return(result) - } - if(type=="wiebull"){ - # From package MASS - if(any( data<= 0)) stop("Weibull values must be > 0") - lx <- log(data) - m <- mean(lx); v <- var(lx) - shape <- 1.2/sqrt(v); scale <- exp(m + 0.572/shape) - result <- list(shape = shape, scale = scale) - return(result) - } - } - for(column in 1:columns){ - data = sort(as.vector(dr[,column])) - threshold = data[threshold*nrow(R)] - column.parameters <- gpdfit(data,threshold) - if(column == 1){ - shape = column.parameters$shape - scale = column.parameters$scale - } - else { - scale = merge(scale, column.parameters$scale) - shape = merge(shape, column.parameters$shape) - print(scale) - print(shape) - } - } - parameters = rbind(scale,shape) - colnames(parameters) = columnnames - parameters = reclass(parameters, x) - rownames(parameters)=c("scale","shape") - return(parameters) -} - - - - - - - - - Deleted: pkg/PerformanceAnalytics/sandbox/pulkit/R/gpdmle.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/gpdmle.R 2013-08-30 15:28:20 UTC (rev 2944) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/gpdmle.R 2013-08-30 18:09:31 UTC (rev 2945) @@ -1,172 +0,0 @@ -## This function comes from the package "POT" . The gpd function -## corresponds to the gpdmle function. So, I'm very gratefull to Mathieu Ribatet. - -gpd <- function(x, threshold, start, ..., - std.err.type = "observed", corr = FALSE, - method = "BFGS", warn.inf = TRUE){ - - if (all(c("observed", "expected", "none") != std.err.type)) - stop("``std.err.type'' must be one of 'observed', 'expected' or 'none'") - - nlpot <- function(scale, shape) { - -.C("gpdlik", exceed, nat, threshold, scale, - shape, dns = double(1), PACKAGE = "POT")$dns - } - - nn <- length(x) - - threshold <- rep(threshold, length.out = nn) - - high <- (x > threshold) & !is.na(x) - threshold <- as.double(threshold[high]) - exceed <- as.double(x[high]) - nat <- length(exceed) - - if(!nat) stop("no data above threshold") - - pat <- nat/nn - param <- c("scale", "shape") - - if(missing(start)) { - - start <- list(scale = 0, shape = 0) - start$scale <- mean(exceed) - min(threshold) - - start <- start[!(param %in% names(list(...)))] - - } - - if(!is.list(start)) - stop("`start' must be a named list") - - if(!length(start)) - stop("there are no parameters left to maximize over") - - nm <- names(start) - l <- length(nm) - f <- formals(nlpot) - names(f) <- param - m <- match(nm, param) - - if(any(is.na(m))) - stop("`start' specifies unknown arguments") - - formals(nlpot) <- c(f[m], f[-m]) - nllh <- function(p, ...) nlpot(p, ...) 
- - if(l > 1) - body(nllh) <- parse(text = paste("nlpot(", paste("p[",1:l, - "]", collapse = ", "), ", ...)")) - - fixed.param <- list(...)[names(list(...)) %in% param] - - if(any(!(param %in% c(nm,names(fixed.param))))) - stop("unspecified parameters") - - start.arg <- c(list(p = unlist(start)), fixed.param) - if( warn.inf && do.call("nllh", start.arg) == 1e6 ) - warning("negative log-likelihood is infinite at starting values") - - opt <- optim(start, nllh, hessian = TRUE, ..., method = method) - - if ((opt$convergence != 0) || (opt$value == 1e6)) { - warning("optimization may not have succeeded") - if(opt$convergence == 1) opt$convergence <- "iteration limit reached" - } - - else opt$convergence <- "successful" - - if (std.err.type != "none"){ - - tol <- .Machine$double.eps^0.5 - - if(std.err.type == "observed") { - - var.cov <- qr(opt$hessian, tol = tol) - if(var.cov$rank != ncol(var.cov$qr)){ - warning("observed information matrix is singular; passing std.err.type to ``expected''") - obs.fish <- FALSE - return - } - - if (std.err.type == "observed"){ - var.cov <- try(solve(var.cov, tol = tol), silent = TRUE) - - if(!is.matrix(var.cov)){ - warning("observed information matrix is singular; passing std.err.type to ''none''") - std.err.type <- "expected" - return - } - - else{ - std.err <- diag(var.cov) - if(any(std.err <= 0)){ - warning("observed information matrix is singular; passing std.err.type to ``expected''") - std.err.type <- "expected" - return - } - - std.err <- sqrt(std.err) - - if(corr) { - .mat <- diag(1/std.err, nrow = length(std.err)) - corr.mat <- structure(.mat %*% var.cov %*% .mat, dimnames = list(nm,nm)) - diag(corr.mat) <- rep(1, length(std.err)) - } - else { - corr.mat <- NULL - } - } - } - } - - if (std.err.type == "expected"){ - - shape <- opt$par[2] - scale <- opt$par[1] - a22 <- 2/((1+shape)*(1+2*shape)) - a12 <- 1/(scale*(1+shape)*(1+2*shape)) - a11 <- 1/((scale^2)*(1+2*shape)) - ##Expected Matix of Information of Fisher - expFisher <- nat * matrix(c(a11,a12,a12,a22),nrow=2) - - expFisher <- qr(expFisher, tol = tol) - var.cov <- solve(expFisher, tol = tol) - std.err <- sqrt(diag(var.cov)) - - if(corr) { - .mat <- diag(1/std.err, nrow = length(std.err)) - corr.mat <- structure(.mat %*% var.cov %*% .mat, dimnames = list(nm,nm)) - diag(corr.mat) <- rep(1, length(std.err)) - } - else - corr.mat <- NULL - } - - colnames(var.cov) <- nm - rownames(var.cov) <- nm - names(std.err) <- nm - } - - else{ - std.err <- std.err.type <- corr.mat <- NULL - var.cov <- NULL - } - - - param <- c(opt$par, unlist(fixed.param)) - scale <- param["scale"] - - var.thresh <- !all(threshold == threshold[1]) - - if (!var.thresh) - threshold <- threshold[1] - - list(fitted.values = opt$par, std.err = std.err, std.err.type = std.err.type, - var.cov = var.cov, fixed = unlist(fixed.param), param = param, - deviance = 2*opt$value, corr = corr.mat, convergence = opt$convergence, - counts = opt$counts, message = opt$message, threshold = threshold, - nat = nat, pat = pat, data = x, exceed = exceed, scale = scale, - var.thresh = var.thresh, est = "MLE", logLik = -opt$value, - opt.value = opt$value, hessian = opt$hessian) -} Modified: pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd 2013-08-30 15:28:20 UTC (rev 2944) +++ pkg/PerformanceAnalytics/sandbox/pulkit/man/chart.BenchmarkSR.Rd 2013-08-30 18:09:31 UTC (rev 2945) @@ -55,8 +55,8 @@ relation ship 
between the Benchmark Sharpe Ratio and average correlation,average sharpe ratio or the number of #'strategies keeping other parameters constant. Here - average Sharpe ratio , average #'correlation stand for - the average of all the strategies in the portfolio. The + average Sharpe ratio , average correlation stand for the + average of all the strategies in the portfolio. The original point of the return series is also shown on the plots. Modified: pkg/PerformanceAnalytics/sandbox/pulkit/src/moment.c =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/src/moment.c 2013-08-30 15:28:20 UTC (rev 2944) +++ pkg/PerformanceAnalytics/sandbox/pulkit/src/moment.c 2013-08-30 18:09:31 UTC (rev 2945) @@ -56,5 +56,5 @@ return Rsum; } - + Copied: pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R (from rev 2943, pkg/PerformanceAnalytics/sandbox/pulkit/R/ExtremeDrawdown.R) =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week7/ExtremeDrawdown.R 2013-08-30 18:09:31 UTC (rev 2945) @@ -0,0 +1,92 @@ +#'@title +#'Modelling Drawdown using Extreme Value Theory +#' +#"@description +#'It has been shown empirically that Drawdowns can be modelled using Modified Generalized Pareto +#'distribution(MGPD), Generalized Pareto Distribution(GPD) and other particular cases of MGPD such +#'as weibull distribution \eqn{MGPD(\gamma,0,\psi)} and unit exponential distribution\eqn{MGPD(1,0,\psi)} +#' +#' Modified Generalized Pareto Distribution is given by the following formula +#' +#' \deqn{ +#' G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m^\gamma}{\psi})^(-1/\eta), if \eta \neq 0 \\ 1- e^{-frac{m^\gamma}{\psi}}, if \eta = 0,\end{array}} +#' +#' Here \eqn{\gamma{\epsilon}R} is the modifying parameter. When \eqn{\gamma<1} the corresponding densities are +#' strictly decreasing with heavier tail; the GDP is recovered by setting \eqn{\gamma = 1} .\eqn{\gamma \textgreater 1} +#' +#' The GDP is given by the following equation. \eqn{MGPD(1,\eta,\psi)} +#' +#'\deqn{G_{\eta}(m) = \begin{array}{l} 1-(1+\eta\frac{m}{\psi})^(-1/\eta), if \eta \neq 0 \\ 1- e^{-frac{m}{\psi}}, if \eta = 0,\end{array}} +#' +#' The weibull distribution is given by the following equation \eqn{MGPD(\gamma,0,\psi)} +#' +#'\deqn{G(m) = 1- e^{-frac{m^\gamma}{\psi}}} +#' +#'In this function weibull and generalized Pareto distribution has been covered. This function can be +#'expanded in the future to include more Extreme Value distributions as the literature on such distribution +#'matures in the future. +#' +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset return +#' @param type The type of distribution "gpd","pd","weibull" +#' @param threshold The threshold beyond which the drawdowns have to be modelled +#' +#' +#'@references +#'Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum Drawdown: Models and Applications (November 2003). +#'Coppead Working Paper Series No. 359.Available at SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322. 
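The Weibull branch of the function below uses a quick moment-based fit on log-drawdowns (the same closed form MASS::fitdistr uses for Weibull starting values, as the code's comment notes). As a standalone sketch, assuming dd is a vector of strictly positive drawdown magnitudes:

weibull_moments <- function(dd) {
  stopifnot(all(dd > 0))                  # Weibull values must be > 0
  lx <- log(dd)
  shape <- 1.2 / sqrt(var(lx))            # moment-based shape estimate
  scale <- exp(mean(lx) + 0.572 / shape)  # implied scale
  c(shape = shape, scale = scale)
}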
+#' +#'@export +DrawdownGPD<-function(R,type=c("gpd","weibull"),threshold=0.90){ + x = checkData(R) + columns = ncol(R) + columnnames = colnames(R) + type = type[1] + dr = -Drawdowns(R) + + + gpdfit<-function(data,threshold){ + if(type=="gpd"){ + gpd_fit = gpd(data,threshold) + result = list(shape = gpd_fit$param[2],scale = gpd_fit$param[1]) + return(result) + } + if(type=="wiebull"){ + # From package MASS + if(any( data<= 0)) stop("Weibull values must be > 0") + lx <- log(data) + m <- mean(lx); v <- var(lx) + shape <- 1.2/sqrt(v); scale <- exp(m + 0.572/shape) + result <- list(shape = shape, scale = scale) + return(result) + } + } + for(column in 1:columns){ + data = sort(as.vector(dr[,column])) + threshold = data[threshold*nrow(R)] + column.parameters <- gpdfit(data,threshold) + if(column == 1){ + shape = column.parameters$shape + scale = column.parameters$scale + } + else { + scale = merge(scale, column.parameters$scale) + shape = merge(shape, column.parameters$shape) + print(scale) + print(shape) + } + } + parameters = rbind(scale,shape) + colnames(parameters) = columnnames + parameters = reclass(parameters, x) + rownames(parameters)=c("scale","shape") + return(parameters) +} + + + + + + + + + Copied: pkg/PerformanceAnalytics/sandbox/pulkit/week7/gpdmle.R (from rev 2943, pkg/PerformanceAnalytics/sandbox/pulkit/R/gpdmle.R) =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/week7/gpdmle.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/pulkit/week7/gpdmle.R 2013-08-30 18:09:31 UTC (rev 2945) @@ -0,0 +1,172 @@ +## This function comes from the package "POT" . The gpd function +## corresponds to the gpdmle function. So, I'm very gratefull to Mathieu Ribatet. +#'@useDynLib gpd +gpd <- function(x, threshold, start, ..., + std.err.type = "observed", corr = FALSE, + method = "BFGS", warn.inf = TRUE){ + + if (all(c("observed", "expected", "none") != std.err.type)) + stop("``std.err.type'' must be one of 'observed', 'expected' or 'none'") + + nlpot <- function(scale, shape) { + -.C("gpdlik", exceed, nat, threshold, scale, + shape, dns = double(1))$dns + } + + nn <- length(x) + + threshold <- rep(threshold, length.out = nn) + + high <- (x > threshold) & !is.na(x) + threshold <- as.double(threshold[high]) + exceed <- as.double(x[high]) + nat <- length(exceed) + + if(!nat) stop("no data above threshold") + + pat <- nat/nn + param <- c("scale", "shape") + + if(missing(start)) { + + start <- list(scale = 0, shape = 0) + start$scale <- mean(exceed) - min(threshold) + + start <- start[!(param %in% names(list(...)))] + + } + + if(!is.list(start)) + stop("`start' must be a named list") + + if(!length(start)) + stop("there are no parameters left to maximize over") + + nm <- names(start) + l <- length(nm) + f <- formals(nlpot) + names(f) <- param + m <- match(nm, param) + + if(any(is.na(m))) + stop("`start' specifies unknown arguments") + + formals(nlpot) <- c(f[m], f[-m]) + nllh <- function(p, ...) nlpot(p, ...) 
+ + if(l > 1) + body(nllh) <- parse(text = paste("nlpot(", paste("p[",1:l, + "]", collapse = ", "), ", ...)")) + + fixed.param <- list(...)[names(list(...)) %in% param] + + if(any(!(param %in% c(nm,names(fixed.param))))) + stop("unspecified parameters") + + start.arg <- c(list(p = unlist(start)), fixed.param) + if( warn.inf && do.call("nllh", start.arg) == 1e6 ) + warning("negative log-likelihood is infinite at starting values") + + opt <- optim(start, nllh, hessian = TRUE, ..., method = method) + + if ((opt$convergence != 0) || (opt$value == 1e6)) { + warning("optimization may not have succeeded") + if(opt$convergence == 1) opt$convergence <- "iteration limit reached" + } + + else opt$convergence <- "successful" + + if (std.err.type != "none"){ + + tol <- .Machine$double.eps^0.5 + + if(std.err.type == "observed") { + + var.cov <- qr(opt$hessian, tol = tol) + if(var.cov$rank != ncol(var.cov$qr)){ + warning("observed information matrix is singular; passing std.err.type to ``expected''") + obs.fish <- FALSE + return + } + + if (std.err.type == "observed"){ + var.cov <- try(solve(var.cov, tol = tol), silent = TRUE) + + if(!is.matrix(var.cov)){ + warning("observed information matrix is singular; passing std.err.type to ''none''") + std.err.type <- "expected" + return + } + + else{ + std.err <- diag(var.cov) + if(any(std.err <= 0)){ + warning("observed information matrix is singular; passing std.err.type to ``expected''") + std.err.type <- "expected" + return + } + + std.err <- sqrt(std.err) + + if(corr) { + .mat <- diag(1/std.err, nrow = length(std.err)) + corr.mat <- structure(.mat %*% var.cov %*% .mat, dimnames = list(nm,nm)) + diag(corr.mat) <- rep(1, length(std.err)) + } + else { + corr.mat <- NULL + } + } + } + } + + if (std.err.type == "expected"){ + + shape <- opt$par[2] + scale <- opt$par[1] + a22 <- 2/((1+shape)*(1+2*shape)) + a12 <- 1/(scale*(1+shape)*(1+2*shape)) + a11 <- 1/((scale^2)*(1+2*shape)) + ##Expected Matix of Information of Fisher + expFisher <- nat * matrix(c(a11,a12,a12,a22),nrow=2) + + expFisher <- qr(expFisher, tol = tol) + var.cov <- solve(expFisher, tol = tol) + std.err <- sqrt(diag(var.cov)) + + if(corr) { + .mat <- diag(1/std.err, nrow = length(std.err)) + corr.mat <- structure(.mat %*% var.cov %*% .mat, dimnames = list(nm,nm)) + diag(corr.mat) <- rep(1, length(std.err)) + } + else + corr.mat <- NULL + } + + colnames(var.cov) <- nm + rownames(var.cov) <- nm + names(std.err) <- nm + } + + else{ + std.err <- std.err.type <- corr.mat <- NULL + var.cov <- NULL + } + + + param <- c(opt$par, unlist(fixed.param)) + scale <- param["scale"] + + var.thresh <- !all(threshold == threshold[1]) + + if (!var.thresh) + threshold <- threshold[1] + + list(fitted.values = opt$par, std.err = std.err, std.err.type = std.err.type, + var.cov = var.cov, fixed = unlist(fixed.param), param = param, + deviance = 2*opt$value, corr = corr.mat, convergence = opt$convergence, + counts = opt$counts, message = opt$message, threshold = threshold, + nat = nat, pat = pat, data = x, exceed = exceed, scale = scale, + var.thresh = var.thresh, est = "MLE", logLik = -opt$value, + opt.value = opt$value, hessian = opt$hessian) +} From noreply at r-forge.r-project.org Sat Aug 31 00:02:54 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 31 Aug 2013 00:02:54 +0200 (CEST) Subject: [Returnanalytics-commits] r2946 - in pkg/PortfolioAnalytics: R man sandbox Message-ID: <20130830220254.B34861851C7@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-31 00:02:53 +0200 
(Sat, 31 Aug 2013) New Revision: 2946 Added: pkg/PortfolioAnalytics/sandbox/testing_diversification.R Modified: pkg/PortfolioAnalytics/R/constraints.R pkg/PortfolioAnalytics/R/optFUN.R pkg/PortfolioAnalytics/R/optimize.portfolio.R pkg/PortfolioAnalytics/man/diversification_constraint.Rd pkg/PortfolioAnalytics/man/gmv_opt.Rd Log: Adding diversification for quadratic utility and min var problems. This is implemented as an over concentration penalty in the objective. Modified: pkg/PortfolioAnalytics/R/constraints.R =================================================================== --- pkg/PortfolioAnalytics/R/constraints.R 2013-08-30 18:09:31 UTC (rev 2945) +++ pkg/PortfolioAnalytics/R/constraints.R 2013-08-30 22:02:53 UTC (rev 2946) @@ -706,6 +706,7 @@ } if(inherits(constraint, "diversification_constraint")){ out$div_target <- constraint$div_target + out$conc_aversion <- constraint$conc_aversion } if(inherits(constraint, "position_limit_constraint")){ out$max_pos <- constraint$max_pos @@ -783,6 +784,8 @@ #' #' @param type character type of the constraint #' @param div_target diversification target value +#' @param conc_aversion concentration aversion parameter. Penalizes over +#' concentration for quadratic utility and minimum variance problems. #' @param enabled TRUE/FALSE #' @param message TRUE/FALSE. The default is message=FALSE. Display messages if TRUE. #' @param \dots any other passthru parameters to specify box and/or group constraints @@ -796,9 +799,10 @@ #' #' pspec <- add.constraint(portfolio=pspec, type="diversification", div_target=0.7) #' @export -diversification_constraint <- function(type="diversification", div_target, enabled=TRUE, message=FALSE, ...){ +diversification_constraint <- function(type="diversification", div_target=NULL, conc_aversion=NULL, enabled=TRUE, message=FALSE, ...){ Constraint <- constraint_v2(type, enabled=enabled, constrclass="diversification_constraint", ...) 
Constraint$div_target <- div_target + Constraint$conc_aversion <- conc_aversion return(Constraint) } Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-30 18:09:31 UTC (rev 2945) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-30 22:02:53 UTC (rev 2946) @@ -9,8 +9,9 @@ #' @param moments object of moments computed based on objective functions #' @param lambda risk_aversion parameter #' @param target target return value +#' @param lambda_hhi concentration aversion parameter #' @author Ross Bennett -gmv_opt <- function(R, constraints, moments, lambda, target){ +gmv_opt <- function(R, constraints, moments, lambda, target, lambda_hhi){ N <- ncol(R) # Applying box constraints @@ -57,9 +58,14 @@ rhs.vec <- c(rhs.vec, constraints$lower, -constraints$upper) } + print(constraints$conc_aversion) + print(lambda_hhi) # set up the quadratic objective - ROI_objective <- Q_objective(Q=2*lambda*moments$var, L=-moments$mean) - + if(!is.null(constraints$conc_aversion)){ + ROI_objective <- Q_objective(Q=2*lambda*moments$var + lambda_hhi * diag(N), L=-moments$mean) + } else { + ROI_objective <- Q_objective(Q=2*lambda*moments$var, L=-moments$mean) + } # set up the optimization problem and solve opt.prob <- OP(objective=ROI_objective, constraints=L_constraint(L=Amat, dir=dir.vec, rhs=rhs.vec), Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-30 18:09:31 UTC (rev 2945) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-30 22:02:53 UTC (rev 2946) @@ -683,6 +683,11 @@ } else { target <- NA } + if(!is.null(constraints$conc_aversion)){ + lambda_hhi <- constraints$conc_aversion + } else { + lambda_hhi <- 0 + } lambda <- 1 for(objective in portfolio$objectives){ if(objective$enabled){ @@ -709,7 +714,7 @@ obj_vals <- constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures out <- list(weights=weights, objective_measures=obj_vals, opt_values=obj_vals, out=roi_result$out, call=call) } else { - roi_result <- gmv_opt(R=R, constraints=constraints, moments=moments, lambda=lambda, target=target) + roi_result <- gmv_opt(R=R, constraints=constraints, moments=moments, lambda=lambda, target=target, lambda_hhi=lambda_hhi) weights <- roi_result$weights obj_vals <- constrained_objective(w=weights, R=R, portfolio, trace=TRUE, normalize=FALSE)$objective_measures out <- list(weights=weights, objective_measures=obj_vals, opt_values=obj_vals, out=roi_result$out, call=call) Modified: pkg/PortfolioAnalytics/man/diversification_constraint.Rd =================================================================== --- pkg/PortfolioAnalytics/man/diversification_constraint.Rd 2013-08-30 18:09:31 UTC (rev 2945) +++ pkg/PortfolioAnalytics/man/diversification_constraint.Rd 2013-08-30 22:02:53 UTC (rev 2946) @@ -3,13 +3,18 @@ \title{constructor for diversification_constraint} \usage{ diversification_constraint(type = "diversification", - div_target, enabled = TRUE, message = FALSE, ...) + div_target = NULL, conc_aversion = NULL, + enabled = TRUE, message = FALSE, ...) } \arguments{ \item{type}{character type of the constraint} \item{div_target}{diversification target value} + \item{conc_aversion}{concentration aversion parameter. + Penalizes over concentration for quadratic utility and + minimum variance problems.} + \item{enabled}{TRUE/FALSE} \item{message}{TRUE/FALSE. 
The default is message=FALSE. Modified: pkg/PortfolioAnalytics/man/gmv_opt.Rd =================================================================== --- pkg/PortfolioAnalytics/man/gmv_opt.Rd 2013-08-30 18:09:31 UTC (rev 2945) +++ pkg/PortfolioAnalytics/man/gmv_opt.Rd 2013-08-30 22:02:53 UTC (rev 2946) @@ -2,7 +2,8 @@ \alias{gmv_opt} \title{Optimization function to solve minimum variance or maximum quadratic utility problems} \usage{ - gmv_opt(R, constraints, moments, lambda, target) + gmv_opt(R, constraints, moments, lambda, target, + lambda_hhi) } \arguments{ \item{R}{xts object of asset returns} @@ -16,6 +17,8 @@ \item{lambda}{risk_aversion parameter} \item{target}{target return value} + + \item{lambda_hhi}{concentration aversion parameter} } \description{ This function is called by optimize.portfolio to solve Added: pkg/PortfolioAnalytics/sandbox/testing_diversification.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_diversification.R (rev 0) +++ pkg/PortfolioAnalytics/sandbox/testing_diversification.R 2013-08-30 22:02:53 UTC (rev 2946) @@ -0,0 +1,49 @@ +library(PortfolioAnalytics) +library(ROI) +library(ROI.plugin.quadprog) + +data(edhec) +R <- edhec[, 1:4] +funds <- colnames(R) + +init <- portfolio.spec(assets=funds) +init <- add.constraint(portfolio=init, type="full_investment") +init <- add.constraint(portfolio=init, type="long_only") +init <- add.constraint(portfolio=init, type="diversification", + conc_aversion=1, enabled=FALSE) + +minvar <- add.objective(portfolio=init, type="risk", name="var") + +qu <- add.objective(portfolio=init, type="risk", name="var", risk_aversion=1e6) +qu <- add.objective(portfolio=qu, type="return", name="mean") + +# minimum variance optimization with full investment and long only constraints +opt_mv <- optimize.portfolio(R=R, portfolio=minvar, optimize_method="ROI", trace=TRUE) + +# minimum variance optimization with full investment, long only, and diversification constraints +minvar$constraints[[3]]$enabled=TRUE +minvar$constraints[[3]]$conc_aversion=0 +opt_mv_div1 <- optimize.portfolio(R=R, portfolio=minvar, optimize_method="ROI", trace=TRUE) + +# The concentration aversion parameter is zero so we should have the same +# result as opt_mv +all.equal(opt_mv$weights, opt_mv_div1$weights) + +# Making the conc_aversion parameter high enough should result in an equal +# weight portfolio. 
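Why a large conc_aversion drives the solution toward equal weights: gmv_opt now minimizes w' Sigma w plus lambda_hhi times sum(w^2), and sum(w^2) is the Herfindahl concentration index, which under a full-investment, long-only constraint is smallest at w = 1/N. A standalone sanity check with quadprog (an illustrative sketch, not part of the package API):

library(quadprog)
sigma <- cov(R)                             # R as defined above: edhec[, 1:4]
N <- ncol(sigma)
lambda_hhi <- 1e6                           # very large concentration penalty
Dmat <- 2 * (sigma + lambda_hhi * diag(N))  # quadratic term with HHI penalty
sol <- solve.QP(Dmat, dvec = rep(0, N),
                Amat = cbind(rep(1, N), diag(N)),
                bvec = c(1, rep(0, N)), meq = 1)
round(sol$solution, 3)                      # approaches rep(1/N, N)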
+minvar$constraints[[3]]$conc_aversion=20 +opt_mv_div2 <- optimize.portfolio(R=R, portfolio=minvar, optimize_method="ROI", trace=TRUE) + +# Now using quadratic utility +opt_qu <- optimize.portfolio(R=R, portfolio=qu, optimize_method="ROI", trace=TRUE) +# equal to the minvar portfolio to 4 significant digits +all.equal(signif(opt_qu$weights, 4), signif(opt_mv$weights, 4)) + +# both the risk aversion and concentration aversion parameters will have to be +# adjusted to result in an equal weight portfolio +qu$constraints[[3]]$enabled=TRUE +qu$constraints[[3]]$conc_aversion=1e6 +qu$objectives[[1]]$risk_aversion=1 +opt_mv_qu <- optimize.portfolio(R=R, portfolio=qu, optimize_method="ROI", trace=TRUE) +opt_mv_qu$weights + From noreply at r-forge.r-project.org Sat Aug 31 02:39:22 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 31 Aug 2013 02:39:22 +0200 (CEST) Subject: [Returnanalytics-commits] r2947 - in pkg/PortfolioAnalytics: R sandbox Message-ID: <20130831003922.483A6185DCB@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-31 02:39:21 +0200 (Sat, 31 Aug 2013) New Revision: 2947 Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R pkg/PortfolioAnalytics/sandbox/testing_risk_budget.R Log: Adding checks to arguments that are passed in to set portfolio moments. Fixing error that was caused when clean='boudt' and dots arguments were passed in to optimize.portfolio Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-30 22:02:53 UTC (rev 2946) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-31 00:39:21 UTC (rev 2947) @@ -490,15 +490,33 @@ constraints <- get_constraints(portfolio) # set portfolio moments only once + # For set.portfolio.moments, we are passing the returns, + # portfolio object, and dotargs. dotargs is a list of arguments + # that are passed in as dots in optimize.portfolio. This was + # causing errors if clean="boudt" was specified in an objective + # and an argument such as itermax was passed in as dots to + # optimize.portfolio. See r2931 if(!is.function(momentFUN)){ momentFUN <- match.fun(momentFUN) } # TODO FIXME should match formals later #dotargs <- set.portfolio.moments(R, constraints, momentargs=dotargs) .mformals <- dotargs - .mformals$R <- R - .mformals$portfolio <- portfolio - mout <- try((do.call(momentFUN,.mformals)) ,silent=TRUE) + #.mformals$R <- R + #.mformals$portfolio <- portfolio + .formals <- formals(momentFUN) + onames <- names(.formals) + if (length(.mformals)) { + dargs <- .mformals + pm <- pmatch(names(dargs), onames, nomatch = 0L) + names(dargs[pm > 0L]) <- onames[pm] + .formals[pm] <- dargs[pm > 0L] + } + .formals$R <- R + .formals$portfolio <- portfolio + .formals$... <- NULL + + mout <- try((do.call(momentFUN, .formals)) ,silent=TRUE) if(inherits(mout,"try-error")) { message(paste("portfolio moment function failed with message",mout)) } else { Modified: pkg/PortfolioAnalytics/sandbox/testing_risk_budget.R =================================================================== --- pkg/PortfolioAnalytics/sandbox/testing_risk_budget.R 2013-08-30 22:02:53 UTC (rev 2946) +++ pkg/PortfolioAnalytics/sandbox/testing_risk_budget.R 2013-08-31 00:39:21 UTC (rev 2947) @@ -54,12 +54,14 @@ # clean.boudt. 
# Error in clean.boudt(na.omit(R[, column, drop = FALSE]), alpha = alpha, : # unused argument(s) (itermax = 50) + +# The error appears to have been fixed set.seed(1234) opt <- optimize.portfolio(R=R.clean, portfolio=min_conc_clean, optimize_method="DEoptim", search_size=5000, itermax=50) -traceback() + # Upon insepecting traceback(), it looks like the error is due to # Return.clean(R, method = objective$arguments.clean, ...) where the dots # are picking up the dots arguments from optimize.portfolio. Is there a way From noreply at r-forge.r-project.org Sat Aug 31 12:39:53 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 31 Aug 2013 12:39:53 +0200 (CEST) Subject: [Returnanalytics-commits] r2948 - in pkg/PerformanceAnalytics/sandbox/Shubhankit: R man noniid.sm noniid.sm/R noniid.sm/man vignettes Message-ID: <20130831103953.BCECD184E52@r-forge.r-project.org> Author: shubhanm Date: 2013-08-31 12:39:53 +0200 (Sat, 31 Aug 2013) New Revision: 2948 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/QP.Norm.R pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ACFSTDEV-concordance.tex pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ACFSTDEV.log pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ACFSTDEV.synctex.gz pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ACFSTDEV.tex Removed: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/quad.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/maxDDGBM.R Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.Okunev.R pkg/PerformanceAnalytics/sandbox/Shubhankit/R/UnsmoothReturn.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.GLM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/UnsmoothReturn.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/se.LoSharpe.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.EMaxDDGBM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/EMaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/GLMSmoothIndex.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/chart.AcarSim.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/quad.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.EmaxDDGBM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ACFSTDEV-Graph10.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ACFSTDEV.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/vignettes/ACFSTDEV.rnw Log: Addition of material in package Documentation editing Complete removal of warning messages in checking of code/Rd file in R Check Process Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R 2013-08-31 00:39:21 UTC (rev 
2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/GLMSmoothIndex.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -17,10 +17,11 @@ #' #' @keywords ts multivariate distribution models non-iid #' @examples +#' require(PerformanceAnalytics) +#' library(PerformanceAnalytics) +#' data(edhec) +#' GLMSmoothIndex(edhec) #' -#' data(edhec) -#' head(GLMSmoothIndex(edhec)) -#' #' @export GLMSmoothIndex<- function(R = NULL, ...) @@ -33,7 +34,7 @@ columns = ncol(x) n = nrow(x) count = q - x=edhec + columns = ncol(x) columnnames = colnames(x) @@ -59,6 +60,7 @@ return(result.df) } + edhec=NULL } ############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.Okunev.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.Okunev.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/Return.Okunev.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -16,8 +16,14 @@ #' #'To remove the \bold{first m orders} of autocorrelation from a given return series we would proceed in a manner very similar to that detailed in \bold{ \code{\link{Return.Geltner}} \cr}. We would initially remove the first order autocorrelation, then proceed to eliminate the second order autocorrelation through the iteration process. In general, to remove any order, m, autocorrelations from a given return series we would make the following transformation to returns: #' autocorrelation structure in the original return series without making any assumptions regarding the actual time series properties of the underlying process. We are implicitly assuming by this approach that the autocorrelations that arise in reported returns are entirely due to the smoothing behavior funds engage in when reporting results. In fact, the method may be adopted to produce any desired level of autocorrelation at any lag and is not limited to simply eliminating all autocorrelations. +#' @param +#' R : an xts, vector, matrix, data frame, timeSeries or zoo object of +#' asset returns +#' @param +#' q : order of autocorrelation coefficient lag factors #' #' +#' #' @references Okunev, John and White, Derek R., \emph{ Hedge Fund Risk Factors and Value at Risk of Credit Trading Strategies} (October 2003). 
#' Available at SSRN: \url{http://ssrn.com/abstract=460641} #' @author Peter Carl, Brian Peterson, Shubhankit Mohan @@ -41,7 +47,7 @@ } return(c(column.okunev)) } -#' Recusrsive Okunev Call Function + quad <- function(R,d) { coeff = as.numeric(acf(as.numeric(edhec[,1]), plot = FALSE)[1:2][[1]]) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/R/UnsmoothReturn.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/R/UnsmoothReturn.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/R/UnsmoothReturn.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -9,13 +9,12 @@ columns = ncol(x) n = nrow(x) count = q - x=edhec columns = ncol(x) columnnames = colnames(x) # Calculate AutoCorrelation Coefficient for(column in 1:columns) { # for each asset passed in as R - y = checkData(edhec[,column], method="vector", na.rm = TRUE) + y = checkData(R[,column], method="vector", na.rm = TRUE) acflag6 = acf(y,plot=FALSE,lag.max=6)[[1]][2:7] values = sum(acflag6*acflag6)/(sum(acflag6)*sum(acflag6)) Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/man/quad.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/man/quad.Rd 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/man/quad.Rd 2013-08-31 10:39:53 UTC (rev 2948) @@ -1,10 +0,0 @@ -\name{quad} -\alias{quad} -\title{Recusrsive Okunev Call Function} -\usage{ - quad(R, d) -} -\description{ - Recusrsive Okunev Call Function -} - Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION 2013-08-31 10:39:53 UTC (rev 2948) @@ -24,7 +24,6 @@ 'chart.Autocorrelation.R' 'GLMSmoothIndex.R' 'LoSharpe.R' - 'maxDDGBM.R' 'na.skip.R' 'noniid.sm-internal.R' 'Return.GLM.R' @@ -34,6 +33,6 @@ 'table.ComparitiveReturn.GLM.R' 'table.UnsmoothReturn.R' 'UnsmoothReturn.R' - 'EmaxDDGBM.R' + 'EMaxDDGBM.R' 'table.EMaxDDGBM.R' - + 'QP.Norm.R' Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -10,6 +10,8 @@ #' divided by volatility can be interpreted as the only function of the ratio mean divided by volatility. 
#' \deqn{MD/[\sigma]= Min (\sum[X(j)])/\sigma = F(\mu/\sigma)} #' Where j varies from 1 to n ,which is the number of drawdown's in simulation +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of +#' asset returns #' @author Shubhankit Mohan #' @references Maximum Loss and Maximum Drawdown in Financial Markets,\emph{International Conference Sponsored by BNP and Imperial College on: #' Forecasting Financial Markets, London, United Kingdom, May 1997} \url{http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf} @@ -22,9 +24,8 @@ AcarSim <- function(R) { - - library(PerformanceAnalytics) - get("edhec") + library(PerformanceAnalytics) + data(edhec) R = checkData(edhec, method="xts") @@ -86,6 +87,7 @@ title("Maximum Drawdown/Volatility as a function of Return/Volatility 36 monthly returns simulated 6,000 times") + edhec=NULL } ############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/EmaxDDGBM.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -1,29 +1,17 @@ -#' @title Expected Drawdown using Brownian Motion Assumptions +#' Expected Drawdown using Brownian Motion Assumptions #' -#' @description Works on the model specified by Maddon-Ismail which investigates the behavior of this statistic for a Brownian motion -#' with drift. -#' @details If X(t) is a random process on [0, T ], the maximum drawdown at time T , D(T), is defined by -#' where \deqn{D(T) = sup [X(s) - X(t)]} where s belongs to [0,t] and s belongs to [0,T] -#'Informally, this is the largest drop from a peak to a bottom. In this paper, we investigate the -#'behavior of this statistic for a Brownian motion with drift. In particular, we give an infinite -#'series representation of its distribution, and consider its expected value. When the drift is zero, -#'we give an analytic expression for the expected value, and for non-zero drift, we give an infinite -#'series representation. For all cases, we compute the limiting \bold{(\eqn{T tends to \infty})} behavior, which can be -#'logarithmic (\eqn{\mu} > 0), square root (\eqn{\mu} = 0), or linear (\eqn{\mu} < 0). -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of asset returns +#' Works on the model specified by Maddon-Ismail +#' +#' +#' +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of +#' asset returns #' @param digits significant number -#' @author Shubhankit Mohan +#' @author Shubhankit #' @keywords Expected Drawdown Using Brownian Motion Assumptions -#' @references Magdon-Ismail, M., Atiya, A., Pratap, A., and Yaser S. Abu-Mostafa: On the Maximum Drawdown of a Browninan Motion, Journal of Applied Probability 41, pp. 
147-161, 2004 \url{http://alumnus.caltech.edu/~amir/drawdown-jrnl.pdf} -#' @keywords Drawdown models Brownian Motion Assumptions -#' @examples -#' -#'library(PerformanceAnalytics) -#' data(edhec) -#' table.EmaxDDGBM(edhec) -#' @rdname table.EmaxDDGBM +#' #' @export -table.EMaxDDGBM <- +EMaxDDGBM <- function (R,digits =4) {# @author @@ -168,40 +156,19 @@ } - # return(Ed) + return(Ed[1]*100) - z = c((mu*100), - (sig*100), - (Ed*100)) - znames = c( - "Annual Returns in %", - "Std Devetions in %", - "Expected Drawdown in %" - ) - if(column == 1) { - resultingtable = data.frame(Value = z, row.names = znames) - } - else { - nextcolumn = data.frame(Value = z, row.names = znames) - resultingtable = cbind(resultingtable, nextcolumn) - } - } - colnames(resultingtable) = columnnames - ans = base::round(resultingtable, digits) - ans - - + } - +} ############################################################################### -################################################################################ -# R (http://r-project.org/) Econometrics for Performance and Risk Analysis +# R (http://r-project.org/) # -# Copyright (c) 2004-2012 Peter Carl and Brian G. Peterson +# Copyright (c) 2004-2013 # # This R package is distributed under the terms of the GNU Public License (GPL) # for full details see the file COPYING # -# $Id: EmaxDDGBM.R 2271 2012-09-02 01:56:23Z braverock $ +# $Id: EMaxDDGBM # ############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -11,19 +11,21 @@ #'Where j belongs to 0 to k,which is the number of lag factors input. #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns +#' @param ... Additional Parameters #' @author Peter Carl, Brian Peterson, Shubhankit Mohan #' @aliases Return.Geltner #' @references \emph{Getmansky, Mila, Lo, Andrew W. and Makarov, Igor} An Econometric Model of Serial Correlation and Illiquidity in Hedge Fund Returns (March 1, 2003). MIT Sloan Working Paper No. 4288-03; MIT Laboratory for Financial Engineering Working Paper No. LFE-1041A-03; EFMA 2003 Helsinki Meetings. Available at SSRN: \url{http://ssrn.com/abstract=384700} #' #' @keywords ts multivariate distribution models non-iid #' @examples +#' require(PerformanceAnalytics) +#' library(PerformanceAnalytics) +#' data(edhec) +#' GLMSmoothIndex(edhec) #' -#' data(edhec) -#' head(GLMSmoothIndex(edhec)) -#' #' @export GLMSmoothIndex<- - function(R = NULL) + function(R = NULL, ...) 
{ columns = 1 columnnames = NULL @@ -33,32 +35,33 @@ columns = ncol(x) n = nrow(x) count = q - x=edhec - columns = ncol(x) - columnnames = colnames(x) + + columns = ncol(x) + columnnames = colnames(x) + + # Calculate AutoCorrelation Coefficient + for(column in 1:columns) { # for each asset passed in as R + y = checkData(x[,column], method="vector", na.rm = TRUE) + sum = sum(abs(acf(y,plot=FALSE,lag.max=6)[[1]][2:7])); + acflag6 = acf(y,plot=FALSE,lag.max=6)[[1]][2:7]/sum; + values = sum(acflag6*acflag6) - # Calculate AutoCorrelation Coefficient - for(column in 1:columns) { # for each asset passed in as R - y = checkData(x[,column], method="vector", na.rm = TRUE) - sum = sum(abs(acf(y,plot=FALSE,lag.max=6)[[1]][2:7])); - acflag6 = acf(y,plot=FALSE,lag.max=6)[[1]][2:7]/sum; - values = sum(acflag6*acflag6) - - if(column == 1) { - result.df = data.frame(Value = values) - colnames(result.df) = columnnames[column] - } - else { - nextcol = data.frame(Value = values) - colnames(nextcol) = columnnames[column] - result.df = cbind(result.df, nextcol) - } + if(column == 1) { + result.df = data.frame(Value = values) + colnames(result.df) = columnnames[column] } + else { + nextcol = data.frame(Value = values) + colnames(nextcol) = columnnames[column] + result.df = cbind(result.df, nextcol) + } + } rownames(result.df)= paste("GLM Smooth Index") - return(result.df) + return(result.df) } + edhec=NULL } ############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -48,7 +48,7 @@ columns.a = ncol(R) columnnames.a = colnames(R) # Time used for daily Return manipulations - Time= 252*nyears(edhec) + Time= 252*nyears(R) clean.lo <- function(column.R,q) { # compute the lagged return series gamma.k =matrix(0,q) @@ -85,7 +85,7 @@ rownames(lo)= paste("Lo Sharpe Ratio") return(lo) - + edhec=NULL # RESULTS: } Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/QP.Norm.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/QP.Norm.R (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/QP.Norm.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -0,0 +1,53 @@ +#' calculate a Normalized Calmar or Sterling reward/risk ratio +#' +#' Normalized Calmar and Sterling Ratios are yet another method of creating a +#' risk-adjusted measure for ranking investments similar to the +#' \code{\link{SharpeRatio}}. +#' +#' Both the Normalized Calmar and the Sterling ratio are the ratio of annualized return +#' over the absolute value of the maximum drawdown of an investment. The +#' Sterling ratio adds an excess risk measure to the maximum drawdown, +#' traditionally and defaulting to 10\%. +#' +#' It is also traditional to use a three year return series for these +#' calculations, although the functions included here make no effort to +#' determine the length of your series. If you want to use a subset of your +#' series, you'll need to truncate or subset the input data to the desired +#' length. +#' +#' Many other measures have been proposed to do similar reward to risk ranking. 
+#' It is the opinion of this author that newer measures such as Sortino's +#' \code{\link{UpsidePotentialRatio}} or Favre's modified +#' \code{\link{SharpeRatio}} are both \dQuote{better} measures, and +#' should be preferred to the Calmar or Sterling Ratio. +#' +#' @aliases Normalized.CalmarRatio Normalized.SterlingRatio +#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of +#' asset returns +#' @param tau Time Scale Translations Factor +#' @param scale number of periods in a year (daily scale = 252, monthly scale = +#' 12, quarterly scale = 4) +#' @author Shubhankit +#' @seealso +#' \code{\link{Return.annualized}}, \cr +#' \code{\link{maxDrawdown}}, \cr +#' \code{\link{SharpeRatio.modified}}, \cr +#' \code{\link{UpsidePotentialRatio}} +#' @references Bacon, Carl. \emph{Magdon-Ismail, M. and Amir Atiya, Maximum drawdown. Risk Magazine, 01 Oct 2004. +#' @keywords ts multivariate distribution models +#' @examples +#' +#' data(managers) +#' Normalized.CalmarRatio(managers[,1,drop=FALSE]) +#' Normalized.CalmarRatio(managers[,1:6]) +#' Normalized.SterlingRatio(managers[,1,drop=FALSE]) +#' Normalized.SterlingRatio(managers[,1:6]) +#' +#' @rdname QP.Norm +#' QP function fo calculation of Sharpe Ratio +#' @export +QP.Norm <- function (R, tau,scale = NA) +{ + Sharpe= as.numeric(SharpeRatio.annualized(R)) + return(.63519+(.5*log(tau))+log(Sharpe)) +} Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.GLM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.GLM.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.GLM.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -48,7 +48,7 @@ columnnames.a = colnames(R) clean.GLM <- function(column.R,q=3) { - ma.coeff = as.numeric((arma(edhec[,1],order=c(0,q)))$coef[1:q]) + ma.coeff = as.numeric((arma(column.R,order=c(0,q)))$coef[1:q]) column.glm = ma.coeff[q]*lag(column.R,q) return(column.glm) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -16,12 +16,17 @@ #' #'To remove the \bold{first m orders} of autocorrelation from a given return series we would proceed in a manner very similar to that detailed in \bold{ \code{\link{Return.Geltner}} \cr}. We would initially remove the first order autocorrelation, then proceed to eliminate the second order autocorrelation through the iteration process. In general, to remove any order, m, autocorrelations from a given return series we would make the following transformation to returns: #' autocorrelation structure in the original return series without making any assumptions regarding the actual time series properties of the underlying process. We are implicitly assuming by this approach that the autocorrelations that arise in reported returns are entirely due to the smoothing behavior funds engage in when reporting results. In fact, the method may be adopted to produce any desired level of autocorrelation at any lag and is not limited to simply eliminating all autocorrelations. 
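# --- Editorial sketch (hypothetical helper, not the package code): the
# transformation described above inverts a Geltner-style smoothing filter,
#   r_obs(t) = c * r_obs(t-1) + (1 - c) * r_true(t)
# so that
#   r_true(t) = (r_obs(t) - c * r_obs(t-1)) / (1 - c),
# applied once per lag to strip the first m autocorrelations.
unsmooth_one_lag <- function(r, cc) (r[-1] - cc * r[-length(r)]) / (1 - cc)
set.seed(42)
r_obs <- as.numeric(arima.sim(n = 120, list(ar = 0.4)))  # toy smoothed series
cc1 <- acf(r_obs, plot = FALSE)$acf[2]  # crude lag-1 estimate; the package
                                        # solves a quadratic for this instead
r_true <- unsmooth_one_lag(r_obs, cc1)
acf(r_true, plot = FALSE)$acf[2]        # lag-1 autocorrelation now near zero
# --- end sketch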
-#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of +#' @param +#' R : an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns -#' @param q number of lag factors for autocorrelation +#' @param +#' q : order of autocorrelation coefficient lag factors +#' +#' +#' #' @references Okunev, John and White, Derek R., \emph{ Hedge Fund Risk Factors and Value at Risk of Credit Trading Strategies} (October 2003). #' Available at SSRN: \url{http://ssrn.com/abstract=460641} -#' @author Shubhankit Mohan +#' @author Peter Carl, Brian Peterson, Shubhankit Mohan #' @seealso \code{\link{Return.Geltner}} \cr #' @keywords ts multivariate distribution models #' @examples @@ -33,12 +38,8 @@ #' @export Return.Okunev<-function(R,q=3) { - column.okunev=R - col=ncol(R) - for(j in 1:col){ - column.okunev[,j] <- column.okunev[,j][!is.na(column.okunev[,j])] -} + column.okunev <- column.okunev[!is.na(column.okunev)] for(i in 1:q) { lagR = lag(column.okunev, k=i) @@ -46,13 +47,40 @@ } return(c(column.okunev)) } -# Recusrsive Okunev Call Function - +#'@title OW Return Model +#'@description The objective is to determine the true underlying return by removing the +#' autocorrelation structure in the original return series without making any assumptions +#' regarding the actual time series properties of the underlying process. We are +#' implicitly assuming by this approach that the autocorrelations that arise in reported +#'returns are entirely due to the smoothing behavior funds engage in when reporting +#' results. In fact, the method may be adopted to produce any desired +#' level of autocorrelation at any lag and is not limited to simply eliminating all +#'autocorrelations.It can be be said as the general form of Geltner Return Model +#'@details +#'Given a sample of historical returns \eqn{R(1),R(2), . . .,R(T)},the method assumes the fund manager smooths returns in the following manner: +#' \deqn{ r(0,t) = \sum \beta (i) r(0,t-i) + (1- \alpha)r(m,t) } +#' Where :\deqn{ \sum \beta (i) = (1- \alpha) } +#' \bold{r(0,t)} : is the observed (reported) return at time t (with 0 adjustments to reported returns), +#'\bold{r(m,t)} : is the true underlying (unreported) return at time t (determined by making m adjustments to reported returns). +#' +#'To remove the \bold{first m orders} of autocorrelation from a given return series we would proceed in a manner very similar to that detailed in \bold{ \code{\link{Return.Geltner}} \cr}. We would initially remove the first order autocorrelation, then proceed to eliminate the second order autocorrelation through the iteration process. In general, to remove any order, m, autocorrelations from a given return series we would make the following transformation to returns: +#' autocorrelation structure in the original return series without making any assumptions regarding the actual time series properties of the underlying process. We are implicitly assuming by this approach that the autocorrelations that arise in reported returns are entirely due to the smoothing behavior funds engage in when reporting results. In fact, the method may be adopted to produce any desired level of autocorrelation at any lag and is not limited to simply eliminating all autocorrelations. +#' @param +#' R : an xts column of +#' asset returns +#' @param +#' d : order of autocorrelation coefficient lag factors +#' @references Okunev, John and White, Derek R., \emph{ Hedge Fund Risk Factors and Value at Risk of Credit Trading Strategies} (October 2003). 
+#' Available at SSRN: \url{http://ssrn.com/abstract=460641} +#' @author Peter Carl, Brian Peterson, Shubhankit Mohan +#' @seealso \code{\link{Return.Geltner}} \cr +#' @keywords ts multivariate distribution models +#' @export quad <- function(R,d) { - coeff = as.numeric(acf(as.numeric(edhec[,1]), plot = FALSE)[1:2][[1]]) -b=-(1+coeff[2]-2*d*coeff[1]) -c=(coeff[1]-d) + coeff = as.numeric(acf(as.numeric(R, plot = FALSE)[1:2][[1]])) + b=-(1+coeff[2]-2*d*coeff[1]) + c=(coeff[1]-d) ans= (-b-sqrt(b*b-4*c*c))/(2*c) #a <- a[!is.na(a)] return(c(ans)) @@ -68,4 +96,3 @@ # $Id: Return.Okunev.R 2163 2012-07-16 00:30:19Z braverock $ # ############################################################################### - Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/UnsmoothReturn.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/UnsmoothReturn.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/UnsmoothReturn.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -9,13 +9,12 @@ columns = ncol(x) n = nrow(x) count = q - x=edhec columns = ncol(x) columnnames = colnames(x) # Calculate AutoCorrelation Coefficient for(column in 1:columns) { # for each asset passed in as R - y = checkData(edhec[,column], method="vector", na.rm = TRUE) + y = checkData(R[,column], method="vector", na.rm = TRUE) acflag6 = acf(y,plot=FALSE,lag.max=6)[[1]][2:7] values = sum(acflag6*acflag6)/(sum(acflag6)*sum(acflag6)) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -17,14 +17,20 @@ #' Forecasting Financial Markets, London, United Kingdom, May 1997} \url{http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf} #' @keywords Maximum Loss Simulated Drawdown #' @examples -#' library(PerformanceAnalytics) +#' require(PerformanceAnalytics) +#' library(PerformanceAnalytics) +#' data(edhec) #' chart.AcarSim(edhec) #' @rdname chart.AcarSim #' @export chart.AcarSim <- function(R) { - R = checkData(Ra, method="xts") + + require(PerformanceAnalytics) + library(PerformanceAnalytics) + data(edhec) + R = checkData(R, method="xts") # Get dimensions and labels # simulated parameters using edhec data mu=mean(Return.annualized(edhec)) @@ -79,6 +85,7 @@ title("Maximum Drawdown/Volatility as a function of Return/Volatility 36 monthly returns simulated 6,000 times") + edhec= NULL } ############################################################################### Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/maxDDGBM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/maxDDGBM.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/maxDDGBM.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -1,174 +0,0 @@ -#' Expected Drawdown using Brownian Motion Assumptions -#' -#' Works on the model specified by Maddon-Ismail -#' -#' -#' -#' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of -#' asset returns - -#' @author R -#' @keywords Expected Drawdown Using Brownian Motion Assumptions -#' -#' @export -EMaxDDGBM <- - function (R,digits =4) - {# @author - - # DESCRIPTION: - 
# Downside Risk Summary: Statistics and Stylized Facts - - # Inputs: - # R: a regular timeseries of returns (rather than prices) - # Output: Table of Estimated Drawdowns - - y = checkData(R, method = "xts") - columns = ncol(y) - rows = nrow(y) - columnnames = colnames(y) - rownames = rownames(y) - T= nyears(y); - - # for each column, do the following: - for(column in 1:columns) { - x = y[,column] - mu = Return.annualized(x, scale = NA, geometric = TRUE) - sig=StdDev(x) - gamma<-sqrt(pi/8) - - if(mu==0){ - - Ed<-2*gamma*sig*sqrt(T) - - } - - else{ - - alpha<-mu*sqrt(T/(2*sig^2)) - - x<-alpha^2 - - if(mu>0){ - - mQp<-matrix(c( - - 0.0005, 0.0010, 0.0015, 0.0020, 0.0025, 0.0050, 0.0075, 0.0100, 0.0125, - - 0.0150, 0.0175, 0.0200, 0.0225, 0.0250, 0.0275, 0.0300, 0.0325, 0.0350, - - 0.0375, 0.0400, 0.0425, 0.0450, 0.0500, 0.0600, 0.0700, 0.0800, 0.0900, - - 0.1000, 0.2000, 0.3000, 0.4000, 0.5000, 1.5000, 2.5000, 3.5000, 4.5000, - - 10, 20, 30, 40, 50, 150, 250, 350, 450, 1000, 2000, 3000, 4000, 5000, 0.019690, - - 0.027694, 0.033789, 0.038896, 0.043372, 0.060721, 0.073808, 0.084693, 0.094171, - - 0.102651, 0.110375, 0.117503, 0.124142, 0.130374, 0.136259, 0.141842, 0.147162, - - 0.152249, 0.157127, 0.161817, 0.166337, 0.170702, 0.179015, 0.194248, 0.207999, - - 0.220581, 0.232212, 0.243050, 0.325071, 0.382016, 0.426452, 0.463159, 0.668992, - - 0.775976, 0.849298, 0.905305, 1.088998, 1.253794, 1.351794, 1.421860, 1.476457, - - 1.747485, 1.874323, 1.958037, 2.020630, 2.219765, 2.392826, 2.494109, 2.565985, - - 2.621743),ncol=2) - - - - if(x<0.0005){ - - Qp<-gamma*sqrt(2*x) - - } - - if(x>0.0005 & x<5000){ - - Qp<-spline(log(mQp[,1]),mQp[,2],n=1,xmin=log(x),xmax=log(x))$y - - } - - if(x>5000){ - - Qp<-0.25*log(x)+0.49088 - - } - - Ed<-(2*sig^2/mu)*Qp - - } - - if(mu<0){ - - mQn<-matrix(c( - - 0.0005, 0.0010, 0.0015, 0.0020, 0.0025, 0.0050, 0.0075, 0.0100, 0.0125, 0.0150, - - 0.0175, 0.0200, 0.0225, 0.0250, 0.0275, 0.0300, 0.0325, 0.0350, 0.0375, 0.0400, - - 0.0425, 0.0450, 0.0475, 0.0500, 0.0550, 0.0600, 0.0650, 0.0700, 0.0750, 0.0800, - - 0.0850, 0.0900, 0.0950, 0.1000, 0.1500, 0.2000, 0.2500, 0.3000, 0.3500, 0.4000, - - 0.5000, 1.0000, 1.5000, 2.0000, 2.5000, 3.0000, 3.5000, 4.0000, 4.5000, 5.0000, - - 0.019965, 0.028394, 0.034874, 0.040369, 0.045256, 0.064633, 0.079746, 0.092708, - - 0.104259, 0.114814, 0.124608, 0.133772, 0.142429, 0.150739, 0.158565, 0.166229, - - 0.173756, 0.180793, 0.187739, 0.194489, 0.201094, 0.207572, 0.213877, 0.220056, - - 0.231797, 0.243374, 0.254585, 0.265472, 0.276070, 0.286406, 0.296507, 0.306393, - - 0.316066, 0.325586, 0.413136, 0.491599, 0.564333, 0.633007, 0.698849, 0.762455, - - 0.884593, 1.445520, 1.970740, 2.483960, 2.990940, 3.492520, 3.995190, 4.492380, - - 4.990430, 5.498820),ncol=2) - - - - - - if(x<0.0005){ - - Qn<-gamma*sqrt(2*x) - - } - - if(x>0.0005 & x<5000){ - - Qn<-spline(mQn[,1],mQn[,2],n=1,xmin=x,xmax=x)$y - - } - - if(x>5000){ - - Qn<-x+0.50 - - } - - Ed<-(2*sig^2/mu)*(-Qn) - - } - - } - - return(Ed[1]*100) - - - } -} -############################################################################### -# R (http://r-project.org/) -# -# Copyright (c) 2004-2013 -# -# This R package is distributed under the terms of the GNU Public License (GPL) -# for full details see the file COPYING -# -# $Id: EMaxDDGBM -# -############################################################################### Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/se.LoSharpe.R =================================================================== --- 
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/se.LoSharpe.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/se.LoSharpe.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -48,7 +48,7 @@ columns.a = ncol(R) columnnames.a = colnames(R) # Time used for daily Return manipulations - Time= 252*nyears(edhec) + Time= 252*nyears(R) clean.lo <- function(column.R,q) { # compute the lagged return series gamma.k =matrix(0,q) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.EMaxDDGBM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.EMaxDDGBM.R 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/table.EMaxDDGBM.R 2013-08-31 10:39:53 UTC (rev 2948) @@ -20,8 +20,8 @@ #' #'library(PerformanceAnalytics) #' data(edhec) -#' table.EmaxDDGBM(edhec) -#' @rdname table.EmaxDDGBM +#' table.EMaxDDGBM(edhec) +#' @rdname table.EMaxDDGBM #' @export table.EMaxDDGBM <- function (R,digits =4) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd 2013-08-31 10:39:53 UTC (rev 2948) @@ -4,6 +4,10 @@ \usage{ AcarSim(R) } +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} +} \description{ To get some insight on the relationships between maximum drawdown per unit of volatility and mean return divided Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/EMaxDDGBM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/EMaxDDGBM.Rd 2013-08-31 00:39:21 UTC (rev 2947) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/EMaxDDGBM.Rd 2013-08-31 10:39:53 UTC (rev 2948) @@ -7,12 +7,14 @@ \arguments{ [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2948 From noreply at r-forge.r-project.org Sat Aug 31 15:30:19 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 31 Aug 2013 15:30:19 +0200 (CEST) Subject: [Returnanalytics-commits] r2949 - in pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm: . 
R man Message-ID: <20130831133019.7ED09185122@r-forge.r-project.org> Author: braverock Date: 2013-08-31 15:30:19 +0200 (Sat, 31 Aug 2013) New Revision: 2949 Removed: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/inst/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/man/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/inst/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/noniid.sm-package.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/quad.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.EmaxDDGBM.Rd Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/NAMESPACE pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/ACStdDev.annualized.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/QP.Norm.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.GLM.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/SterlingRatio.Norm.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/ACStdDev.annualized.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/LoSharpe.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.GLM.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/SterlingRatio.Norm.Rd Log: - changes to get clean package build, clean doc build, and cleaner R CMD check Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION 2013-08-31 10:39:53 UTC (rev 2948) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION 2013-08-31 13:30:19 UTC (rev 2949) @@ -1,38 +1,38 @@ -Package: noniid.sm -Type: Package -Title: Non-i.i.d. GSoC 2013 Shubhankit -Version: 0.1 -Date: $Date: 2013-05-13 14:30:22 -0500 (Mon, 13 May 2013) $ -Author: Shubhankit Mohan -Contributors: Peter Carl, Brian G. Peterson -Depends: - xts, - PerformanceAnalytics, - tseries, - stats -Maintainer: Brian G. Peterson -Description: GSoC 2013 project to replicate literature on drawdowns and - non-i.i.d assumptions in finance. -License: GPL-3 -ByteCompile: TRUE -Collate: - 'AcarSim.R' - 'ACStdDev.annualized.R' - 'CalmarRatio.Norm.R' - 'CDrawdown.R' - 'chart.AcarSim.R' - 'chart.Autocorrelation.R' - 'GLMSmoothIndex.R' - 'LoSharpe.R' - 'na.skip.R' - 'noniid.sm-internal.R' - 'Return.GLM.R' - 'Return.Okunev.R' - 'se.LoSharpe.R' - 'SterlingRatio.Norm.R' - 'table.ComparitiveReturn.GLM.R' - 'table.UnsmoothReturn.R' - 'UnsmoothReturn.R' - 'EMaxDDGBM.R' - 'table.EMaxDDGBM.R' - 'QP.Norm.R' +Package: noniid.sm +Type: Package +Title: Non-i.i.d. GSoC 2013 Shubhankit +Version: 0.1 +Date: $Date: 2013-05-13 14:30:22 -0500 (Mon, 13 May 2013) $ +Author: Shubhankit Mohan +Contributors: Peter Carl, Brian G. Peterson +Depends: + xts, + PerformanceAnalytics, + tseries, + stats +Maintainer: Brian G. Peterson +Description: GSoC 2013 project to replicate literature on drawdowns and + non-i.i.d assumptions in finance. 
+License: GPL-3 +ByteCompile: TRUE +Collate: + 'AcarSim.R' + 'ACStdDev.annualized.R' + 'CalmarRatio.Norm.R' + 'CDrawdown.R' + 'chart.AcarSim.R' + 'chart.Autocorrelation.R' + 'EmaxDDGBM.R' + 'GLMSmoothIndex.R' + 'LoSharpe.R' + 'na.skip.R' + 'noniid.sm-internal.R' + 'QP.Norm.R' + 'Return.GLM.R' + 'Return.Okunev.R' + 'se.LoSharpe.R' + 'SterlingRatio.Norm.R' + 'table.ComparitiveReturn.GLM.R' + 'table.EMaxDDGBM.R' + 'table.UnsmoothReturn.R' + 'UnsmoothReturn.R' Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/NAMESPACE =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/NAMESPACE 2013-08-31 10:39:53 UTC (rev 2948) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/NAMESPACE 2013-08-31 13:30:19 UTC (rev 2949) @@ -1,16 +1,17 @@ -export(AcarSim) -export(ACStdDev.annualized) -export(CalmarRatio.Norm) -export(CDrawdown) -export(chart.AcarSim) -export(chart.Autocorrelation) -export(EMaxDDGBM) -export(GLMSmoothIndex) -export(LoSharpe) -export(Return.GLM) -export(Return.Okunev) -export(se.LoSharpe) -export(SterlingRatio.Norm) -export(table.ComparitiveReturn.GLM) -export(table.EMaxDDGBM) -export(table.UnsmoothReturn) +export(AcarSim) +export(ACStdDev.annualized) +export(CalmarRatio.Norm) +export(CDrawdown) +export(chart.AcarSim) +export(chart.Autocorrelation) +export(EMaxDDGBM) +export(GLMSmoothIndex) +export(LoSharpe) +export(QP.Norm) +export(Return.GLM) +export(Return.Okunev) +export(se.LoSharpe) +export(SterlingRatio.Norm) +export(table.ComparitiveReturn.GLM) +export(table.EMaxDDGBM) +export(table.UnsmoothReturn) Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/ACStdDev.annualized.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/ACStdDev.annualized.R 2013-08-31 10:39:53 UTC (rev 2948) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/ACStdDev.annualized.R 2013-08-31 13:30:19 UTC (rev 2949) @@ -18,6 +18,7 @@ #' @keywords ts multivariate distribution models #' @examples #' library(PerformanceAnalytics) +#' data(edhec) #' ACStdDev.annualized(edhec,3) #' #' @export Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R 2013-08-31 10:39:53 UTC (rev 2948) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R 2013-08-31 13:30:19 UTC (rev 2949) @@ -34,7 +34,7 @@ #' @examples #' #' data(edhec) -#' head(LoSharpe(edhec,0,3) +#' head(LoSharpe(edhec,0,3)) #' @rdname LoSharpe #' @export LoSharpe <- Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/QP.Norm.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/QP.Norm.R 2013-08-31 10:39:53 UTC (rev 2948) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/QP.Norm.R 2013-08-31 13:30:19 UTC (rev 2949) @@ -1,50 +1,11 @@ -#' calculate a Normalized Calmar or Sterling reward/risk ratio -#' -#' Normalized Calmar and Sterling Ratios are yet another method of creating a -#' risk-adjusted measure for ranking investments similar to the -#' \code{\link{SharpeRatio}}. -#' -#' Both the Normalized Calmar and the Sterling ratio are the ratio of annualized return -#' over the absolute value of the maximum drawdown of an investment. 
The -#' Sterling ratio adds an excess risk measure to the maximum drawdown, -#' traditionally and defaulting to 10\%. -#' -#' It is also traditional to use a three year return series for these -#' calculations, although the functions included here make no effort to -#' determine the length of your series. If you want to use a subset of your -#' series, you'll need to truncate or subset the input data to the desired -#' length. -#' -#' Many other measures have been proposed to do similar reward to risk ranking. -#' It is the opinion of this author that newer measures such as Sortino's -#' \code{\link{UpsidePotentialRatio}} or Favre's modified -#' \code{\link{SharpeRatio}} are both \dQuote{better} measures, and -#' should be preferred to the Calmar or Sterling Ratio. -#' -#' @aliases Normalized.CalmarRatio Normalized.SterlingRatio +#' QP function for calculation of Sharpe Ratio +#' #' @param R an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns #' @param tau Time Scale Translations Factor #' @param scale number of periods in a year (daily scale = 252, monthly scale = -#' 12, quarterly scale = 4) -#' @author Shubhankit #' @seealso -#' \code{\link{Return.annualized}}, \cr -#' \code{\link{maxDrawdown}}, \cr -#' \code{\link{SharpeRatio.modified}}, \cr -#' \code{\link{UpsidePotentialRatio}} -#' @references Bacon, Carl. \emph{Magdon-Ismail, M. and Amir Atiya, Maximum drawdown. Risk Magazine, 01 Oct 2004. -#' @keywords ts multivariate distribution models -#' @examples -#' -#' data(managers) -#' Normalized.CalmarRatio(managers[,1,drop=FALSE]) -#' Normalized.CalmarRatio(managers[,1:6]) -#' Normalized.SterlingRatio(managers[,1,drop=FALSE]) -#' Normalized.SterlingRatio(managers[,1:6]) -#' -#' @rdname QP.Norm -#' QP function fo calculation of Sharpe Ratio +#' \code{\link{CalmarRatio.Norm}}, \cr #' @export QP.Norm <- function (R, tau,scale = NA) { Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.GLM.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.GLM.R 2013-08-31 10:39:53 UTC (rev 2948) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.GLM.R 2013-08-31 13:30:19 UTC (rev 2949) @@ -1,11 +1,12 @@ -#' @title GLM Return Model -#' @description True returns represent the flow of information that would determine the equilibrium +#' GLM Return Model +#' +#' True returns represent the flow of information that would determine the equilibrium #' value of the fund's securities in a frictionless market. However, true economic #' returns are not observed. The returns to hedge funds and other alternative investments are often #' highly serially correlated.We propose an econometric model of return smoothing and \emph{develop estimators for the smoothing #' profile as well as a smoothing-adjusted Sharpe ratio}. 
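# --- Editorial sketch (not the package code): the smoothing model in the
# description above, simulated directly. Observed returns are a moving average
# of true returns with weights theta summing to one, which is what induces the
# serial correlation that Return.GLM's MA(q) fit then strips out.
set.seed(7)
theta  <- c(0.6, 0.3, 0.1)                      # theta_0..theta_2, sum to 1
r_true <- rnorm(240, mean = 0.005, sd = 0.02)   # i.i.d. "economic" returns
r_obs  <- as.numeric(stats::filter(r_true, theta, sides = 1))
acf(r_true, plot = FALSE)$acf[2]                # roughly zero
acf(r_obs[-(1:2)], plot = FALSE)$acf[2]         # clearly positive (smoothing)
# --- end sketch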
#' @examples -#' library(PerformanceAnalytics) +#' data(edhec) #' Return.GLM(edhec,4) #' @param #' Ra : an xts, vector, matrix, data frame, timeSeries or zoo object of Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R 2013-08-31 10:39:53 UTC (rev 2948) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R 2013-08-31 13:30:19 UTC (rev 2949) @@ -1,21 +1,38 @@ -#'@title OW Return Model -#'@description The objective is to determine the true underlying return by removing the +#' Okunev and White Return Model +#' +#' The objective is to determine the true underlying return by removing the #' autocorrelation structure in the original return series without making any assumptions #' regarding the actual time series properties of the underlying process. We are #' implicitly assuming by this approach that the autocorrelations that arise in reported -#'returns are entirely due to the smoothing behavior funds engage in when reporting +#' returns are entirely due to the smoothing behavior funds engage in when reporting #' results. In fact, the method may be adopted to produce any desired #' level of autocorrelation at any lag and is not limited to simply eliminating all -#'autocorrelations.It can be be said as the general form of Geltner Return Model +#' autocorrelations.It can be be said as the general form of Geltner Return Model +#' #'@details -#'Given a sample of historical returns \eqn{R(1),R(2), . . .,R(T)},the method assumes the fund manager smooths returns in the following manner: +#' Given a sample of historical returns \eqn{R(1),R(2), . . .,R(T)}, +#' the method assumes the fund manager smooths returns in the following manner: #' \deqn{ r(0,t) = \sum \beta (i) r(0,t-i) + (1- \alpha)r(m,t) } #' Where :\deqn{ \sum \beta (i) = (1- \alpha) } #' \bold{r(0,t)} : is the observed (reported) return at time t (with 0 adjustments to reported returns), -#'\bold{r(m,t)} : is the true underlying (unreported) return at time t (determined by making m adjustments to reported returns). +#' \bold{r(m,t)} : is the true underlying (unreported) return at +#' time t (determined by making m adjustments to reported returns). #' -#'To remove the \bold{first m orders} of autocorrelation from a given return series we would proceed in a manner very similar to that detailed in \bold{ \code{\link{Return.Geltner}} \cr}. We would initially remove the first order autocorrelation, then proceed to eliminate the second order autocorrelation through the iteration process. In general, to remove any order, m, autocorrelations from a given return series we would make the following transformation to returns: -#' autocorrelation structure in the original return series without making any assumptions regarding the actual time series properties of the underlying process. We are implicitly assuming by this approach that the autocorrelations that arise in reported returns are entirely due to the smoothing behavior funds engage in when reporting results. In fact, the method may be adopted to produce any desired level of autocorrelation at any lag and is not limited to simply eliminating all autocorrelations. +#' To remove the \bold{first m orders} of autocorrelation from a given return +#' series we would proceed in a manner very similar to that detailed in +#' \bold{\code{\link{Return.Geltner}} \cr}. 
We would initially remove the first order +#' autocorrelation, then proceed to eliminate the second order autocorrelation +#' through the iteration process. In general, to remove any order, m, +#' autocorrelations from a given return series we would make the following +#' transformation to returns: +#' autocorrelation structure in the original return series without making any +#' assumptions regarding the actual time series properties of the underlying +#' process. We are implicitly assuming by this approach that the autocorrelations +#' that arise in reported returns are entirely due to the smoothing behavior funds +#' engage in when reporting results. In fact, the method may be adopted to produce +#' any desired level of autocorrelation at any lag and is not limited to simply +#' eliminating all autocorrelations. +#' #' @param #' R : an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns @@ -32,7 +49,7 @@ #' @examples #' #' data(managers) -#' head(Return.Okunev(managers[,1:3]),n=3) +#' Return.Okunev(managers) #' #' #' @export @@ -47,35 +64,8 @@ } return(c(column.okunev)) } -#'@title OW Return Model -#'@description The objective is to determine the true underlying return by removing the -#' autocorrelation structure in the original return series without making any assumptions -#' regarding the actual time series properties of the underlying process. We are -#' implicitly assuming by this approach that the autocorrelations that arise in reported -#'returns are entirely due to the smoothing behavior funds engage in when reporting -#' results. In fact, the method may be adopted to produce any desired -#' level of autocorrelation at any lag and is not limited to simply eliminating all -#'autocorrelations.It can be be said as the general form of Geltner Return Model -#'@details -#'Given a sample of historical returns \eqn{R(1),R(2), . . .,R(T)},the method assumes the fund manager smooths returns in the following manner: -#' \deqn{ r(0,t) = \sum \beta (i) r(0,t-i) + (1- \alpha)r(m,t) } -#' Where :\deqn{ \sum \beta (i) = (1- \alpha) } -#' \bold{r(0,t)} : is the observed (reported) return at time t (with 0 adjustments to reported returns), -#'\bold{r(m,t)} : is the true underlying (unreported) return at time t (determined by making m adjustments to reported returns). -#' -#'To remove the \bold{first m orders} of autocorrelation from a given return series we would proceed in a manner very similar to that detailed in \bold{ \code{\link{Return.Geltner}} \cr}. We would initially remove the first order autocorrelation, then proceed to eliminate the second order autocorrelation through the iteration process. In general, to remove any order, m, autocorrelations from a given return series we would make the following transformation to returns: -#' autocorrelation structure in the original return series without making any assumptions regarding the actual time series properties of the underlying process. We are implicitly assuming by this approach that the autocorrelations that arise in reported returns are entirely due to the smoothing behavior funds engage in when reporting results. In fact, the method may be adopted to produce any desired level of autocorrelation at any lag and is not limited to simply eliminating all autocorrelations. 
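# --- Editorial sketch (hypothetical wrapper mirroring the quad() helper that
# follows): given the first two sample autocorrelations rho1, rho2 and a target
# lag-1 autocorrelation d (d = 0 removes it entirely), solve the quadratic for
# the smoothing coefficient used in the unsmoothing filter. As in the package
# code, a negative discriminant yields NaN.
solve_smoothing_coef <- function(r, d = 0) {
  rho <- as.numeric(acf(r, plot = FALSE, lag.max = 2)$acf)[2:3]  # rho1, rho2
  b  <- -(1 + rho[2] - 2 * d * rho[1])
  cq <- rho[1] - d
  (-b - sqrt(b * b - 4 * cq * cq)) / (2 * cq)
}
# --- end sketch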
-#' @param -#' R : an xts column of -#' asset returns -#' @param -#' d : order of autocorrelation coefficient lag factors -#' @references Okunev, John and White, Derek R., \emph{ Hedge Fund Risk Factors and Value at Risk of Credit Trading Strategies} (October 2003). -#' Available at SSRN: \url{http://ssrn.com/abstract=460641} -#' @author Peter Carl, Brian Peterson, Shubhankit Mohan -#' @seealso \code{\link{Return.Geltner}} \cr -#' @keywords ts multivariate distribution models -#' @export + +#helper function for Return.Okunev, not exported quad <- function(R,d) { coeff = as.numeric(acf(as.numeric(R, plot = FALSE)[1:2][[1]])) @@ -85,6 +75,7 @@ #a <- a[!is.na(a)] return(c(ans)) } + ############################################################################### # R (http://r-project.org/) Econometrics for Performance and Risk Analysis # Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/SterlingRatio.Norm.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/SterlingRatio.Norm.R 2013-08-31 10:39:53 UTC (rev 2948) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/SterlingRatio.Norm.R 2013-08-31 13:30:19 UTC (rev 2949) @@ -9,7 +9,7 @@ #' Sterling ratio adds an \bold{excess risk} measure to the maximum drawdown, #' traditionally and defaulting to 10\%. #' -#' \deqn{Sterling Ratio = [Return over (0,T)]/[max Drawdown(0,T) - 10%]} +#' \deqn{Sterling Ratio = [Return over (0,T)]/[max Drawdown(0,T) - 10\%]} #' It is also \emph{traditional} to use a three year return series for these #' calculations, although the functions included here make no effort to #' determine the length of your series. If you want to use a subset of your Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/ACStdDev.annualized.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/ACStdDev.annualized.Rd 2013-08-31 10:39:53 UTC (rev 2948) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/ACStdDev.annualized.Rd 2013-08-31 13:30:19 UTC (rev 2949) @@ -1,52 +1,53 @@ -\name{ACStdDev.annualized} -\alias{ACStdDev.annualized} -\alias{sd.annualized} -\alias{sd.multiperiod} -\alias{StdDev.annualized} -\title{Autocorrleation adjusted Standard Deviation} -\usage{ - ACStdDev.annualized(R, lag = 6, scale = NA, ...) -} -\arguments{ - \item{R}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} - - \item{lag}{: number of autocorrelated lag factors - inputted by user} - - \item{scale}{number of periods in a year (daily scale = - 252, monthly scale = 12, quarterly scale = 4)} - - \item{\dots}{any other passthru parameters} -} -\description{ - Incorporating the component of lagged autocorrelation - factor into adjusted time scale standard deviation - translation -} -\details{ - Given a sample of historical returns R(1),R(2), . . - .,R(T),the method assumes the fund manager smooths - returns in the following manner, when 't' is the unit - time interval: The square root time translation can be - defined as : \deqn{ \sigma(T) = T \sqrt\sigma(t)} -} -\examples{ -library(PerformanceAnalytics) -ACStdDev.annualized(edhec,3) -} -\author{ - Peter Carl,Brian Peterson, Shubhankit Mohan - \url{http://en.wikipedia.org/wiki/Volatility_(finance)} -} -\references{ - Burghardt, G., and L. 
Liu, \emph{ It's the - Autocorrelation, Stupid (November 2012) Newedge working - paper.} - \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} -} -\keyword{distribution} -\keyword{models} -\keyword{multivariate} -\keyword{ts} - +\name{ACStdDev.annualized} +\alias{ACStdDev.annualized} +\alias{sd.annualized} +\alias{sd.multiperiod} +\alias{StdDev.annualized} +\title{Autocorrleation adjusted Standard Deviation} +\usage{ + ACStdDev.annualized(R, lag = 6, scale = NA, ...) +} +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} + + \item{lag}{: number of autocorrelated lag factors + inputted by user} + + \item{scale}{number of periods in a year (daily scale = + 252, monthly scale = 12, quarterly scale = 4)} + + \item{\dots}{any other passthru parameters} +} +\description{ + Incorporating the component of lagged autocorrelation + factor into adjusted time scale standard deviation + translation +} +\details{ + Given a sample of historical returns R(1),R(2), . . + .,R(T),the method assumes the fund manager smooths + returns in the following manner, when 't' is the unit + time interval: The square root time translation can be + defined as : \deqn{ \sigma(T) = T \sqrt\sigma(t)} +} +\examples{ +library(PerformanceAnalytics) +data(edhec) +ACStdDev.annualized(edhec,3) +} +\author{ + Peter Carl,Brian Peterson, Shubhankit Mohan + \url{http://en.wikipedia.org/wiki/Volatility_(finance)} +} +\references{ + Burghardt, G., and L. Liu, \emph{ It's the + Autocorrelation, Stupid (November 2012) Newedge working + paper.} + \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} +} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{ts} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/LoSharpe.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/LoSharpe.Rd 2013-08-31 10:39:53 UTC (rev 2948) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/LoSharpe.Rd 2013-08-31 13:30:19 UTC (rev 2949) @@ -1,70 +1,70 @@ -\name{LoSharpe} -\alias{LoSharpe} -\title{Andrew Lo Sharpe Ratio} -\usage{ - LoSharpe(Ra, Rf = 0, q = 3, ...) -} -\arguments{ - \item{Ra}{an xts, vector, matrix, data frame, timeSeries - or zoo object of daily asset returns} - - \item{Rf}{an xts, vector, matrix, data frame, timeSeries - or zoo object of annualized Risk Free Rate} - - \item{q}{Number of autocorrelated lag periods. Taken as 3 - (Default)} - - \item{\dots}{any other pass thru parameters} -} -\description{ - Although the Sharpe ratio has become part of the canon of - modern financial analysis, its applications typically do - not account for the fact that it is an estimated - quantity, subject to estimation errors that can be - substantial in some cases. - - Many studies have documented various violations of the - assumption of IID returns for financial securities. - - Under the assumption of stationarity,a version of the - Central Limit Theorem can still be applied to the - estimator . -} -\details{ - The relationship between SR and SR(q) is somewhat more - involved for non- IID returns because the variance of - Rt(q) is not just the sum of the variances of component - returns but also includes all the covariances. 
- Specifically, under the assumption that returns \eqn{R_t} - are stationary, \deqn{ Var[(R_t)] = \sum \sum - Cov(R(t-i),R(t-j)) = q{\sigma^2} + 2{\sigma^2} \sum - (q-k)\rho(k) } Where \eqn{ \rho(k) = - Cov(R(t),R(t-k))/Var[(R_t)]} is the \eqn{k^{th}} order - autocorrelation coefficient of the series of returns.This - yields the following relationship between SR and SR(q): - and i,j belongs to 0 to q-1 \deqn{SR(q) = \eta(q) } Where - : \deqn{ }{\eta(q) = [q]/[\sqrt(q\sigma^2) + 2\sigma^2 - \sum(q-k)\rho(k)] } Where k belongs to 0 to q-1 -} -\examples{ -data(edhec) -head(LoSharpe(edhec,0,3) -} -\author{ - Brian G. Peterson, Peter Carl, Shubhankit Mohan -} -\references{ - Getmansky, Mila, Lo, Andrew W. and Makarov, Igor,\emph{ - An Econometric Model of Serial Correlation and - Illiquidity in Hedge Fund Returns} (March 1, 2003). MIT - Sloan Working Paper No. 4288-03; MIT Laboratory for - Financial Engineering Working Paper No. LFE-1041A-03; - EFMA 2003 Helsinki Meetings. - \url{http://ssrn.com/abstract=384700} -} -\keyword{distribution} -\keyword{models} -\keyword{multivariate} -\keyword{non-iid} -\keyword{ts} - +\name{LoSharpe} +\alias{LoSharpe} +\title{Andrew Lo Sharpe Ratio} +\usage{ + LoSharpe(Ra, Rf = 0, q = 3, ...) +} +\arguments{ + \item{Ra}{an xts, vector, matrix, data frame, timeSeries + or zoo object of daily asset returns} + + \item{Rf}{an xts, vector, matrix, data frame, timeSeries + or zoo object of annualized Risk Free Rate} + + \item{q}{Number of autocorrelated lag periods. Taken as 3 + (Default)} + + \item{\dots}{any other pass thru parameters} +} +\description{ + Although the Sharpe ratio has become part of the canon of + modern financial analysis, its applications typically do + not account for the fact that it is an estimated + quantity, subject to estimation errors that can be + substantial in some cases. + + Many studies have documented various violations of the + assumption of IID returns for financial securities. + + Under the assumption of stationarity,a version of the + Central Limit Theorem can still be applied to the + estimator . +} +\details{ + The relationship between SR and SR(q) is somewhat more + involved for non- IID returns because the variance of + Rt(q) is not just the sum of the variances of component + returns but also includes all the covariances. + Specifically, under the assumption that returns \eqn{R_t} + are stationary, \deqn{ Var[(R_t)] = \sum \sum + Cov(R(t-i),R(t-j)) = q{\sigma^2} + 2{\sigma^2} \sum + (q-k)\rho(k) } Where \eqn{ \rho(k) = + Cov(R(t),R(t-k))/Var[(R_t)]} is the \eqn{k^{th}} order + autocorrelation coefficient of the series of returns.This + yields the following relationship between SR and SR(q): + and i,j belongs to 0 to q-1 \deqn{SR(q) = \eta(q) } Where + : \deqn{ }{\eta(q) = [q]/[\sqrt(q\sigma^2) + 2\sigma^2 + \sum(q-k)\rho(k)] } Where k belongs to 0 to q-1 +} +\examples{ +data(edhec) +head(LoSharpe(edhec,0,3)) +} +\author{ + Brian G. Peterson, Peter Carl, Shubhankit Mohan +} +\references{ + Getmansky, Mila, Lo, Andrew W. and Makarov, Igor,\emph{ + An Econometric Model of Serial Correlation and + Illiquidity in Hedge Fund Returns} (March 1, 2003). MIT + Sloan Working Paper No. 4288-03; MIT Laboratory for + Financial Engineering Working Paper No. LFE-1041A-03; + EFMA 2003 Helsinki Meetings. 
+ \url{http://ssrn.com/abstract=384700} +} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{non-iid} +\keyword{ts} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.GLM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.GLM.Rd 2013-08-31 10:39:53 UTC (rev 2948) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.GLM.Rd 2013-08-31 13:30:19 UTC (rev 2949) @@ -1,66 +1,66 @@ -\name{Return.GLM} -\alias{Return.GLM} -\title{GLM Return Model} -\usage{ - Return.GLM(Ra, q = 3) -} -\arguments{ - \item{Ra}{: an xts, vector, matrix, data frame, - timeSeries or zoo object of asset returns} - - \item{q}{: order of autocorrelation coefficient lag - factors} -} -\description{ - True returns represent the flow of information that would - determine the equilibrium value of the fund's securities - in a frictionless market. However, true economic returns - are not observed. The returns to hedge funds and other - alternative investments are often highly serially - correlated.We propose an econometric model of return - smoothing and \emph{develop estimators for the smoothing - profile as well as a smoothing-adjusted Sharpe ratio}. -} -\details{ - To quantify the impact of all of these possible sources - of serial correlation, denote by R(t) the true economic - return of a hedge fund in period 't'; and let R(t) - satisfy the following linear single-factor model: where: - \deqn{R(0,t) = \theta_{0}R(t) + \theta_{1}R(t-1) + - \theta_{2}R(t-2) .... + \theta_{k}R(t-k)} Where : - \eqn{\theta}'i is defined as the weighted lag of - autocorrelated lag and whose sum is 1. \deqn{\theta (j) - \epsilon [0,1] where : j = 0,1,....,k } and, \deqn{\theta - _1 + \theta _2 + \theta _3 \cdots + \theta _k = 1} Using - the methods outlined above , the paper estimates the - smoothing model using maximumlikelihood - procedure-programmed in Matlab using the Optimization - Toolbox andreplicated in Stata usingits MA(k) estimation - routine.Using Time seseries analysis and computational - finance("\bold{tseries}") library , we fit an it an - \bold{ARMA} model to a univariate time series by - conditional least squares. For exact maximum likelihood - estimation,arima0 from package \bold{stats} can be used. -} -\examples{ -library(PerformanceAnalytics) -Return.GLM(edhec,4) -} -\author{ - Brian Peterson,Peter Carl, Shubhankit Mohan -} -\references{ - Mila Getmansky, Andrew W. Lo, Igor Makarov,\emph{An - econometric model of serial correlation and and - illiquidity in hedge fund Returns},Journal of Financial - Economics 74 (2004).\url{ - http://ssrn.com/abstract=384700} -} -\seealso{ - Return.Geltner -} -\keyword{distribution} -\keyword{model} -\keyword{multivariate} -\keyword{ts} - +\name{Return.GLM} +\alias{Return.GLM} +\title{GLM Return Model} +\usage{ + Return.GLM(Ra, q = 3) +} +\arguments{ + \item{Ra}{: an xts, vector, matrix, data frame, + timeSeries or zoo object of asset returns} + + \item{q}{: order of autocorrelation coefficient lag + factors} +} +\description{ + True returns represent the flow of information that would + determine the equilibrium value of the fund's securities + in a frictionless market. However, true economic returns + are not observed. 
The returns to hedge funds and other + alternative investments are often highly serially + correlated.We propose an econometric model of return + smoothing and \emph{develop estimators for the smoothing + profile as well as a smoothing-adjusted Sharpe ratio}. +} +\details{ + To quantify the impact of all of these possible sources + of serial correlation, denote by R(t) the true economic + return of a hedge fund in period 't'; and let R(t) + satisfy the following linear single-factor model: where: + \deqn{R(0,t) = \theta_{0}R(t) + \theta_{1}R(t-1) + + \theta_{2}R(t-2) .... + \theta_{k}R(t-k)} Where : + \eqn{\theta}'i is defined as the weighted lag of + autocorrelated lag and whose sum is 1. \deqn{\theta (j) + \epsilon [0,1] where : j = 0,1,....,k } and, \deqn{\theta + _1 + \theta _2 + \theta _3 \cdots + \theta _k = 1} Using + the methods outlined above , the paper estimates the + smoothing model using maximumlikelihood + procedure-programmed in Matlab using the Optimization + Toolbox andreplicated in Stata usingits MA(k) estimation + routine.Using Time seseries analysis and computational + finance("\bold{tseries}") library , we fit an it an + \bold{ARMA} model to a univariate time series by + conditional least squares. For exact maximum likelihood + estimation,arima0 from package \bold{stats} can be used. +} +\examples{ +data(edhec) +Return.GLM(edhec,4) +} +\author{ + Brian Peterson,Peter Carl, Shubhankit Mohan +} +\references{ + Mila Getmansky, Andrew W. Lo, Igor Makarov,\emph{An + econometric model of serial correlation and and + illiquidity in hedge fund Returns},Journal of Financial + Economics 74 (2004).\url{ + http://ssrn.com/abstract=384700} +} +\seealso{ + Return.Geltner +} +\keyword{distribution} +\keyword{model} +\keyword{multivariate} +\keyword{ts} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd 2013-08-31 10:39:53 UTC (rev 2948) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd 2013-08-31 13:30:19 UTC (rev 2949) @@ -1,78 +1,79 @@ -\name{Return.Okunev} -\alias{Return.Okunev} -\title{OW Return Model} -\usage{ - Return.Okunev(R, q = 3) -} -\arguments{ - \item{R}{: an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} - - \item{q}{: order of autocorrelation coefficient lag - factors} -} -\description{ - The objective is to determine the true underlying return - by removing the autocorrelation structure in the original - return series without making any assumptions regarding - the actual time series properties of the underlying - process. We are implicitly assuming by this approach that - the autocorrelations that arise in reported returns are - entirely due to the smoothing behavior funds engage in - when reporting results. In fact, the method may be - adopted to produce any desired level of autocorrelation - at any lag and is not limited to simply eliminating all - autocorrelations.It can be be said as the general form of - Geltner Return Model -} -\details{ - Given a sample of historical returns \eqn{R(1),R(2), . . 
- .,R(T)},the method assumes the fund manager smooths - returns in the following manner: \deqn{ r(0,t) = \sum - \beta (i) r(0,t-i) + (1- \alpha)r(m,t) } Where :\deqn{ - \sum \beta (i) = (1- \alpha) } \bold{r(0,t)} : is the - observed (reported) return at time t (with 0 adjustments - to reported returns), \bold{r(m,t)} : is the true - underlying (unreported) return at time t (determined by - making m adjustments to reported returns). - - To remove the \bold{first m orders} of autocorrelation - from a given return series we would proceed in a manner - very similar to that detailed in \bold{ - \code{\link{Return.Geltner}} \cr}. We would initially - remove the first order autocorrelation, then proceed to - eliminate the second order autocorrelation through the - iteration process. In general, to remove any order, m, - autocorrelations from a given return series we would make - the following transformation to returns: autocorrelation - structure in the original return series without making - any assumptions regarding the actual time series - properties of the underlying process. We are implicitly - assuming by this approach that the autocorrelations that - arise in reported returns are entirely due to the - smoothing behavior funds engage in when reporting - results. In fact, the method may be adopted to produce - any desired level of autocorrelation at any lag and is - not limited to simply eliminating all autocorrelations. -} [TRUNCATED] To get the complete diff run: svnlook diff /svnroot/returnanalytics -r 2949 From noreply at r-forge.r-project.org Sat Aug 31 15:44:16 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 31 Aug 2013 15:44:16 +0200 (CEST) Subject: [Returnanalytics-commits] r2950 - pkg/PerformanceAnalytics/sandbox/pulkit Message-ID: <20130831134416.AAF6A1801C7@r-forge.r-project.org> Author: braverock Date: 2013-08-31 15:44:16 +0200 (Sat, 31 Aug 2013) New Revision: 2950 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION Log: - fix Collate Modified: pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-31 13:30:19 UTC (rev 2949) +++ pkg/PerformanceAnalytics/sandbox/pulkit/DESCRIPTION 2013-08-31 13:44:16 UTC (rev 2950) @@ -34,6 +34,7 @@ 'MaxDD.R' 'MinTRL.R' 'MonteSimulTriplePenance.R' + 'na.skip.R' 'ProbSharpeRatio.R' 'PSRopt.R' 'REDDCOPS.R' @@ -44,7 +45,3 @@ 'table.PSR.R' 'TriplePenance.R' 'TuW.R' - 'na.skip.R' - 'capm_aorda.R' - 'psr_python.R' - 'ret.R' From noreply at r-forge.r-project.org Sat Aug 31 16:03:26 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 31 Aug 2013 16:03:26 +0200 (CEST) Subject: [Returnanalytics-commits] r2951 - pkg/PerformanceAnalytics/sandbox/pulkit/R Message-ID: <20130831140326.C9154185C52@r-forge.r-project.org> Author: braverock Date: 2013-08-31 16:03:26 +0200 (Sat, 31 Aug 2013) New Revision: 2951 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R Log: - fix Return.portfolio typo Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-31 13:44:16 UTC (rev 2950) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/DrawdownBeta.R 2013-08-31 14:03:26 UTC (rev 2951) @@ -82,7 +82,7 @@ p = 1 } if(!is.null(weights)){ - x = 
Returns.portfolio(R,weights) + x = Return.portfolio(R,weights) } if(geometric){ cumul_x = cumprod(x+1)-1 Modified: pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R 2013-08-31 13:44:16 UTC (rev 2950) +++ pkg/PerformanceAnalytics/sandbox/pulkit/R/Drawdownalpha.R 2013-08-31 14:03:26 UTC (rev 2951) @@ -66,7 +66,7 @@ xm = checkData(Rm) beta = BetaDrawdown(R,Rm,p = p,weights=weights,geometric=geometric,type=type,...) if(!is.null(weights)){ - x = Returns.portfolio(R,weights) + x = Return.portfolio(R,weights) } if(geometric){ cumul_x = cumprod(x+1)-1 From noreply at r-forge.r-project.org Sat Aug 31 18:30:59 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 31 Aug 2013 18:30:59 +0200 (CEST) Subject: [Returnanalytics-commits] r2952 - in pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm: . R man Message-ID: <20130831163059.F0F4D185C4C@r-forge.r-project.org> Author: shubhanm Date: 2013-08-31 18:30:59 +0200 (Sat, 31 Aug 2013) New Revision: 2952 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/inst/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/QP.Norm.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.EMaxDDGBM.Rd Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/se.LoSharpe.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/se.LoSharpe.Rd Log: Clean Return.Okunev.R Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R 2013-08-31 14:03:26 UTC (rev 2951) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R 2013-08-31 16:30:59 UTC (rev 2952) @@ -1,40 +1,23 @@ -#' Okunev and White Return Model -#' -#' The objective is to determine the true underlying return by removing the +#'@title OW Return Model +#'@description The objective is to determine the true underlying return by removing the #' autocorrelation structure in the original return series without making any assumptions #' regarding the actual time series properties of the underlying process. We are #' implicitly assuming by this approach that the autocorrelations that arise in reported -#' returns are entirely due to the smoothing behavior funds engage in when reporting +#'returns are entirely due to the smoothing behavior funds engage in when reporting #' results. In fact, the method may be adopted to produce any desired #' level of autocorrelation at any lag and is not limited to simply eliminating all -#' autocorrelations.It can be be said as the general form of Geltner Return Model -#' +#'autocorrelations.It can be be said as the general form of Geltner Return Model #'@details -#' Given a sample of historical returns \eqn{R(1),R(2), . . .,R(T)}, -#' the method assumes the fund manager smooths returns in the following manner: +#'Given a sample of historical returns \eqn{R(1),R(2), . . 
.,R(T)},the method assumes the fund manager smooths returns in the following manner: #' \deqn{ r(0,t) = \sum \beta (i) r(0,t-i) + (1- \alpha)r(m,t) } #' Where :\deqn{ \sum \beta (i) = (1- \alpha) } #' \bold{r(0,t)} : is the observed (reported) return at time t (with 0 adjustments to reported returns), -#' \bold{r(m,t)} : is the true underlying (unreported) return at -#' time t (determined by making m adjustments to reported returns). +#'\bold{r(m,t)} : is the true underlying (unreported) return at time t (determined by making m adjustments to reported returns). #' -#' To remove the \bold{first m orders} of autocorrelation from a given return -#' series we would proceed in a manner very similar to that detailed in -#' \bold{\code{\link{Return.Geltner}} \cr}. We would initially remove the first order -#' autocorrelation, then proceed to eliminate the second order autocorrelation -#' through the iteration process. In general, to remove any order, m, -#' autocorrelations from a given return series we would make the following -#' transformation to returns: -#' autocorrelation structure in the original return series without making any -#' assumptions regarding the actual time series properties of the underlying -#' process. We are implicitly assuming by this approach that the autocorrelations -#' that arise in reported returns are entirely due to the smoothing behavior funds -#' engage in when reporting results. In fact, the method may be adopted to produce -#' any desired level of autocorrelation at any lag and is not limited to simply -#' eliminating all autocorrelations. -#' +#'To remove the \bold{first m orders} of autocorrelation from a given return series we would proceed in a manner very similar to that detailed in \bold{ \code{\link{Return.Geltner}} \cr}. We would initially remove the first order autocorrelation, then proceed to eliminate the second order autocorrelation through the iteration process. In general, to remove any order, m, autocorrelations from a given return series we would make the following transformation to returns: +#' autocorrelation structure in the original return series without making any assumptions regarding the actual time series properties of the underlying process. We are implicitly assuming by this approach that the autocorrelations that arise in reported returns are entirely due to the smoothing behavior funds engage in when reporting results. In fact, the method may be adopted to produce any desired level of autocorrelation at any lag and is not limited to simply eliminating all autocorrelations. 
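+#'
+#' As an illustrative sketch only (an exposition aid, not the package
+#' implementation): a single Geltner-style first-order unsmoothing step
+#' applied to a numeric vector of observed returns could be written as
+#' \preformatted{
+#' unsmooth_once <- function(r) {
+#'   rho1   <- acf(as.numeric(r), plot = FALSE)$acf[2]  # lag-one autocorrelation
+#'   lagged <- c(NA, r[-length(r)])                     # r(0,t-1)
+#'   (r - rho1 * lagged) / (1 - rho1)                   # unsmoothed series
+#' }
+#' }
+#' \code{Return.Okunev} below instead solves for the adjustment coefficient
+#' at each lag so that a chosen autocorrelation level (zero by default) is
+#' attained.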
#' @param -#' R : an xts, vector, matrix, data frame, timeSeries or zoo object of +#' Ra : an xts, vector, matrix, data frame, timeSeries or zoo object of #' asset returns #' @param #' q : order of autocorrelation coefficient lag factors @@ -49,33 +32,57 @@ #' @examples #' #' data(managers) -#' Return.Okunev(managers) +#' head(Return.Okunev(managers[,1:3]),n=3) #' #' #' @export -Return.Okunev<-function(R,q=3) +Return.Okunev<-function(Ra,q=3) { - column.okunev=R - column.okunev <- column.okunev[!is.na(column.okunev)] - for(i in 1:q) + R = checkData(Ra, method="xts") + # Get dimensions and labels + columns.a = ncol(R) + columnnames.a = colnames(R) + clean.okunev <- function(column.R) { + # compute the lagged return series + + lagR = na.omit(lag(column.R,1)) + # compute the first order autocorrelation + column.R= (column.R-(lagR*quad(lagR,0)))/(1-quad(lagR,0)) + return(column.R) + } + + quad <- function(R,d) { - lagR = lag(column.okunev, k=i) - column.okunev= (column.okunev-(lagR*quad(lagR,0)))/(1-quad(lagR,0)) + coeff = as.numeric(acf(as.numeric(R), plot = FALSE)[1:2][[1]]) + b=-(1+coeff[2]-2*d*coeff[1]) + c=(coeff[1]-d) + ans= (-b-sqrt(b*b-4*c*c))/(2*c) + #a <- a[!is.na(a)] + return(c(ans)) } - return(c(column.okunev)) + + + for(column.a in 1:columns.a) { # for each asset passed in as R + # clean the data and get rid of NAs + column.okunev=R[,column.a] + for(i in 1:q) + { + column.okunev <- clean.okunev(column.okunev) + column.okunev=na.omit(column.okunev) + } + if(column.a == 1) { okunev = column.okunev } + else { okunev = cbind (okunev, column.okunev) } + + } + + #return(c(column.okunev)) + colnames(okunev) = columnnames.a + + # RESULTS: + return(reclass(okunev,match.to=Ra)) + } -#helper function for Return.Okunev, not exported -quad <- function(R,d) -{ - coeff = as.numeric(acf(as.numeric(R, plot = FALSE)[1:2][[1]])) - b=-(1+coeff[2]-2*d*coeff[1]) - c=(coeff[1]-d) - ans= (-b-sqrt(b*b-4*c*c))/(2*c) - #a <- a[!is.na(a)] - return(c(ans)) -} - ############################################################################### # R (http://r-project.org/) Econometrics for Performance and Risk Analysis # Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/se.LoSharpe.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/se.LoSharpe.R 2013-08-31 14:03:26 UTC (rev 2951) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/se.LoSharpe.R 2013-08-31 16:30:59 UTC (rev 2952) @@ -34,7 +34,7 @@ #' @examples #' #' data(edhec) -#' head(se.LoSharpe(edhec,0,3) +#' se.LoSharpe(edhec,0,3) #' @rdname se.LoSharpe #' @export se.LoSharpe <- Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/QP.Norm.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/QP.Norm.Rd (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/QP.Norm.Rd 2013-08-31 16:30:59 UTC (rev 2952) @@ -0,0 +1,22 @@ +\name{QP.Norm} +\alias{QP.Norm} +\title{QP function for calculation of Sharpe Ratio} +\usage{ + QP.Norm(R, tau, scale = NA) +} +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} + + \item{tau}{Time Scale Translations Factor} + + \item{scale}{number of periods in a year (daily scale = + 252, monthly scale =} +} +\description{ + QP function for calculation of Sharpe Ratio +} +\seealso{ + \code{\link{CalmarRatio.Norm}}, \cr +} + Modified: 
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd 2013-08-31 14:03:26 UTC (rev 2951) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/Return.Okunev.Rd 2013-08-31 16:30:59 UTC (rev 2952) @@ -1,79 +1,78 @@ -\name{Return.Okunev} -\alias{Return.Okunev} -\title{Okunev and White Return Model} -\usage{ - Return.Okunev(R, q = 3) -} -\arguments{ - \item{R}{: an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} - - \item{q}{: order of autocorrelation coefficient lag - factors} -} -\description{ - The objective is to determine the true underlying return - by removing the autocorrelation structure in the original - return series without making any assumptions regarding - the actual time series properties of the underlying - process. We are implicitly assuming by this approach that - the autocorrelations that arise in reported returns are - entirely due to the smoothing behavior funds engage in - when reporting results. In fact, the method may be - adopted to produce any desired level of autocorrelation - at any lag and is not limited to simply eliminating all - autocorrelations.It can be be said as the general form of - Geltner Return Model -} -\details{ - Given a sample of historical returns \eqn{R(1),R(2), . . - .,R(T)}, the method assumes the fund manager smooths - returns in the following manner: \deqn{ r(0,t) = \sum - \beta (i) r(0,t-i) + (1- \alpha)r(m,t) } Where :\deqn{ - \sum \beta (i) = (1- \alpha) } \bold{r(0,t)} : is the - observed (reported) return at time t (with 0 adjustments - to reported returns), \bold{r(m,t)} : is the true - underlying (unreported) return at time t (determined by - making m adjustments to reported returns). - - To remove the \bold{first m orders} of autocorrelation - from a given return series we would proceed in a manner - very similar to that detailed in - \bold{\code{\link{Return.Geltner}} \cr}. We would - initially remove the first order autocorrelation, then - proceed to eliminate the second order autocorrelation - through the iteration process. In general, to remove any - order, m, autocorrelations from a given return series we - would make the following transformation to returns: - autocorrelation structure in the original return series - without making any assumptions regarding the actual time - series properties of the underlying process. We are - implicitly assuming by this approach that the - autocorrelations that arise in reported returns are - entirely due to the smoothing behavior funds engage in - when reporting results. In fact, the method may be - adopted to produce any desired level of autocorrelation - at any lag and is not limited to simply eliminating all - autocorrelations. -} -\examples{ -data(managers) -Return.Okunev(managers) -} -\author{ - Peter Carl, Brian Peterson, Shubhankit Mohan -} -\references{ - Okunev, John and White, Derek R., \emph{ Hedge Fund Risk - Factors and Value at Risk of Credit Trading Strategies} - (October 2003). 
Available at SSRN: - \url{http://ssrn.com/abstract=460641} -} -\seealso{ - \code{\link{Return.Geltner}} \cr -} -\keyword{distribution} -\keyword{models} -\keyword{multivariate} -\keyword{ts} - +\name{Return.Okunev} +\alias{Return.Okunev} +\title{OW Return Model} +\usage{ + Return.Okunev(Ra, q = 3) +} +\arguments{ + \item{Ra}{: an xts, vector, matrix, data frame, + timeSeries or zoo object of asset returns} + + \item{q}{: order of autocorrelation coefficient lag + factors} +} +\description{ + The objective is to determine the true underlying return + by removing the autocorrelation structure in the original + return series without making any assumptions regarding + the actual time series properties of the underlying + process. We are implicitly assuming by this approach that + the autocorrelations that arise in reported returns are + entirely due to the smoothing behavior funds engage in + when reporting results. In fact, the method may be + adopted to produce any desired level of autocorrelation + at any lag and is not limited to simply eliminating all + autocorrelations.It can be be said as the general form of + Geltner Return Model +} +\details{ + Given a sample of historical returns \eqn{R(1),R(2), . . + .,R(T)},the method assumes the fund manager smooths + returns in the following manner: \deqn{ r(0,t) = \sum + \beta (i) r(0,t-i) + (1- \alpha)r(m,t) } Where :\deqn{ + \sum \beta (i) = (1- \alpha) } \bold{r(0,t)} : is the + observed (reported) return at time t (with 0 adjustments + to reported returns), \bold{r(m,t)} : is the true + underlying (unreported) return at time t (determined by + making m adjustments to reported returns). + + To remove the \bold{first m orders} of autocorrelation + from a given return series we would proceed in a manner + very similar to that detailed in \bold{ + \code{\link{Return.Geltner}} \cr}. We would initially + remove the first order autocorrelation, then proceed to + eliminate the second order autocorrelation through the + iteration process. In general, to remove any order, m, + autocorrelations from a given return series we would make + the following transformation to returns: autocorrelation + structure in the original return series without making + any assumptions regarding the actual time series + properties of the underlying process. We are implicitly + assuming by this approach that the autocorrelations that + arise in reported returns are entirely due to the + smoothing behavior funds engage in when reporting + results. In fact, the method may be adopted to produce + any desired level of autocorrelation at any lag and is + not limited to simply eliminating all autocorrelations. +} +\examples{ +data(managers) +head(Return.Okunev(managers[,1:3]),n=3) +} +\author{ + Peter Carl, Brian Peterson, Shubhankit Mohan +} +\references{ + Okunev, John and White, Derek R., \emph{ Hedge Fund Risk + Factors and Value at Risk of Credit Trading Strategies} + (October 2003). 
Available at SSRN: + \url{http://ssrn.com/abstract=460641} +} +\seealso{ + \code{\link{Return.Geltner}} \cr +} +\keyword{distribution} +\keyword{models} +\keyword{multivariate} +\keyword{ts} + Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/se.LoSharpe.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/se.LoSharpe.Rd 2013-08-31 14:03:26 UTC (rev 2951) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/se.LoSharpe.Rd 2013-08-31 16:30:59 UTC (rev 2952) @@ -48,7 +48,7 @@ } \examples{ data(edhec) -head(se.LoSharpe(edhec,0,3) +se.LoSharpe(edhec,0,3) } \author{ Brian G. Peterson, Peter Carl, Shubhankit Mohan Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.EMaxDDGBM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.EMaxDDGBM.Rd (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.EMaxDDGBM.Rd 2013-08-31 16:30:59 UTC (rev 2952) @@ -0,0 +1,57 @@ +\name{table.EMaxDDGBM} +\alias{table.EMaxDDGBM} +\title{Expected Drawdown using Brownian Motion Assumptions} +\usage{ + table.EMaxDDGBM(R, digits = 4) +} +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of asset returns} + + \item{digits}{significant number} +} +\description{ + Works on the model specified by Maddon-Ismail which + investigates the behavior of this statistic for a + Brownian motion with drift. +} +\details{ + If X(t) is a random process on [0, T ], the maximum + drawdown at time T , D(T), is defined by where \deqn{D(T) + = sup [X(s) - X(t)]} where s belongs to [0,t] and s + belongs to [0,T] Informally, this is the largest drop + from a peak to a bottom. In this paper, we investigate + the behavior of this statistic for a Brownian motion with + drift. In particular, we give an infinite series + representation of its distribution, and consider its + expected value. When the drift is zero, we give an + analytic expression for the expected value, and for + non-zero drift, we give an infinite series + representation. For all cases, we compute the limiting + \bold{(\eqn{T tends to \infty})} behavior, which can be + logarithmic (\eqn{\mu} > 0), square root (\eqn{\mu} = 0), + or linear (\eqn{\mu} < 0). +} +\examples{ +library(PerformanceAnalytics) +data(edhec) +table.EMaxDDGBM(edhec) +} +\author{ + Shubhankit Mohan +} +\references{ + Magdon-Ismail, M., Atiya, A., Pratap, A., and Yaser S. + Abu-Mostafa: On the Maximum Drawdown of a Browninan + Motion, Journal of Applied Probability 41, pp. 147-161, + 2004 + \url{http://alumnus.caltech.edu/~amir/drawdown-jrnl.pdf} +} +\keyword{Assumptions} +\keyword{Brownian} +\keyword{Drawdown} +\keyword{Expected} +\keyword{models} +\keyword{Motion} +\keyword{Using} + From noreply at r-forge.r-project.org Sat Aug 31 20:19:26 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 31 Aug 2013 20:19:26 +0200 (CEST) Subject: [Returnanalytics-commits] r2953 - pkg/PortfolioAnalytics/R Message-ID: <20130831181926.2D3DF1851EE@r-forge.r-project.org> Author: rossbennett34 Date: 2013-08-31 20:19:25 +0200 (Sat, 31 Aug 2013) New Revision: 2953 Modified: pkg/PortfolioAnalytics/R/optFUN.R pkg/PortfolioAnalytics/R/optimize.portfolio.R Log: Fixing case with optimize.portfolio where the moment function was failing if no dot args were passed in. Removing print statements from gmv_opt. 
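For context, an illustrative standalone sketch of the pairlist pitfall the
log message describes (the toy function f below is an assumption, not the
package code):

f <- function(R, portfolio, ...) length(R)
.formals <- formals(f)       # formals() returns a pairlist, not a plain list
.formals$R <- 1:10
.formals$portfolio <- "p"
.formals$... <- NULL
# the new guard coerces before the arguments are handed to do.call():
if (!inherits(.formals, "list")) .formals <- as.list(.formals)
do.call(f, .formals)         # returns 10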
Modified: pkg/PortfolioAnalytics/R/optFUN.R =================================================================== --- pkg/PortfolioAnalytics/R/optFUN.R 2013-08-31 16:30:59 UTC (rev 2952) +++ pkg/PortfolioAnalytics/R/optFUN.R 2013-08-31 18:19:25 UTC (rev 2953) @@ -58,8 +58,6 @@ rhs.vec <- c(rhs.vec, constraints$lower, -constraints$upper) } - print(constraints$conc_aversion) - print(lambda_hhi) # set up the quadratic objective if(!is.null(constraints$conc_aversion)){ ROI_objective <- Q_objective(Q=2*lambda*moments$var + lambda_hhi * diag(N), L=-moments$mean) Modified: pkg/PortfolioAnalytics/R/optimize.portfolio.R =================================================================== --- pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-31 16:30:59 UTC (rev 2952) +++ pkg/PortfolioAnalytics/R/optimize.portfolio.R 2013-08-31 18:19:25 UTC (rev 2953) @@ -515,7 +515,10 @@ .formals$R <- R .formals$portfolio <- portfolio .formals$... <- NULL - + + # If no dotargs are passed in, .formals was a pairlist and do.call was failing + if(!inherits(.formals, "list")) .formals <- as.list(.formals) + mout <- try((do.call(momentFUN, .formals)) ,silent=TRUE) if(inherits(mout,"try-error")) { message(paste("portfolio moment function failed with message",mout)) From noreply at r-forge.r-project.org Sat Aug 31 20:39:23 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 31 Aug 2013 20:39:23 +0200 (CEST) Subject: [Returnanalytics-commits] r2954 - in pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm: . man Message-ID: <20130831183923.8C03B1805AA@r-forge.r-project.org> Author: braverock Date: 2013-08-31 20:39:23 +0200 (Sat, 31 Aug 2013) New Revision: 2954 Removed: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/inst/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/QP.Norm.Rd pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.EMaxDDGBM.Rd Log: - remove obsolete files Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/QP.Norm.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/QP.Norm.Rd 2013-08-31 18:19:25 UTC (rev 2953) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/QP.Norm.Rd 2013-08-31 18:39:23 UTC (rev 2954) @@ -1,22 +0,0 @@ -\name{QP.Norm} -\alias{QP.Norm} -\title{QP function for calculation of Sharpe Ratio} -\usage{ - QP.Norm(R, tau, scale = NA) -} -\arguments{ - \item{R}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} - - \item{tau}{Time Scale Translations Factor} - - \item{scale}{number of periods in a year (daily scale = - 252, monthly scale =} -} -\description{ - QP function for calculation of Sharpe Ratio -} -\seealso{ - \code{\link{CalmarRatio.Norm}}, \cr -} - Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.EMaxDDGBM.Rd =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.EMaxDDGBM.Rd 2013-08-31 18:19:25 UTC (rev 2953) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/table.EMaxDDGBM.Rd 2013-08-31 18:39:23 UTC (rev 2954) @@ -1,57 +0,0 @@ -\name{table.EMaxDDGBM} -\alias{table.EMaxDDGBM} -\title{Expected Drawdown using Brownian Motion Assumptions} -\usage{ - table.EMaxDDGBM(R, digits = 4) -} -\arguments{ - \item{R}{an xts, vector, matrix, data frame, timeSeries - or zoo object of asset returns} - - \item{digits}{significant number} -} -\description{ - Works on the model specified 
by Maddon-Ismail which - investigates the behavior of this statistic for a - Brownian motion with drift. -} -\details{ - If X(t) is a random process on [0, T ], the maximum - drawdown at time T , D(T), is defined by where \deqn{D(T) - = sup [X(s) - X(t)]} where s belongs to [0,t] and s - belongs to [0,T] Informally, this is the largest drop - from a peak to a bottom. In this paper, we investigate - the behavior of this statistic for a Brownian motion with - drift. In particular, we give an infinite series - representation of its distribution, and consider its - expected value. When the drift is zero, we give an - analytic expression for the expected value, and for - non-zero drift, we give an infinite series - representation. For all cases, we compute the limiting - \bold{(\eqn{T tends to \infty})} behavior, which can be - logarithmic (\eqn{\mu} > 0), square root (\eqn{\mu} = 0), - or linear (\eqn{\mu} < 0). -} -\examples{ -library(PerformanceAnalytics) -data(edhec) -table.EMaxDDGBM(edhec) -} -\author{ - Shubhankit Mohan -} -\references{ - Magdon-Ismail, M., Atiya, A., Pratap, A., and Yaser S. - Abu-Mostafa: On the Maximum Drawdown of a Browninan - Motion, Journal of Applied Probability 41, pp. 147-161, - 2004 - \url{http://alumnus.caltech.edu/~amir/drawdown-jrnl.pdf} -} -\keyword{Assumptions} -\keyword{Brownian} -\keyword{Drawdown} -\keyword{Expected} -\keyword{models} -\keyword{Motion} -\keyword{Using} - From noreply at r-forge.r-project.org Sat Aug 31 22:52:41 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 31 Aug 2013 22:52:41 +0200 (CEST) Subject: [Returnanalytics-commits] r2955 - in pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm: . vignettes Message-ID: <20130831205241.DE2D11854E9@r-forge.r-project.org> Author: shubhanm Date: 2013-08-31 22:52:41 +0200 (Sat, 31 Aug 2013) New Revision: 2955 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ACFSTDEV.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ACFSTDEV.rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/LoSharpe.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/LoSharpe.pdf Log: ./ Addition of clean build vignettes Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ACFSTDEV.pdf =================================================================== (Binary files differ) Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ACFSTDEV.pdf ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ACFSTDEV.rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ACFSTDEV.rnw (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ACFSTDEV.rnw 2013-08-31 20:52:41 UTC (rev 2955) @@ -0,0 +1,90 @@ +%% no need for \DeclareGraphicsExtensions{.pdf,.eps} + +\documentclass[12pt,letterpaper,english]{article} +\usepackage{times} +\usepackage[T1]{fontenc} +\IfFileExists{url.sty}{\usepackage{url}} + {\newcommand{\url}{\texttt}} + +\usepackage{babel} +%\usepackage{noweb} +\usepackage{Rd} + +\usepackage{Sweave} +\SweaveOpts{engine=R,eps=FALSE} +%\VignetteIndexEntry{Performance Attribution from Bacon} +%\VignetteDepends{PerformanceAnalytics} +%\VignetteKeywords{returns, performance, risk, 
benchmark, portfolio}
+%\VignettePackage{PerformanceAnalytics}
+
+%\documentclass[a4paper]{article}
+%\usepackage[noae]{Sweave}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage[top=3cm, bottom=3cm, left=2.5cm]{geometry}
+%\usepackage{graphicx}
+%\usepackage{graphicx, verbatim}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage{graphicx}
+
+\title{Autocorrelated Standard Deviation}
+\author{R Project for Statistical Computing}
+
+\begin{document}
+\SweaveOpts{concordance=TRUE}
+
+\maketitle
+
+
+\begin{abstract}
+The fact that many hedge fund returns exhibit extraordinary levels of serial correlation is now well known and generally accepted. Because hedge fund strategies have exceptionally high autocorrelations in reported returns, and this is taken as evidence of return smoothing, we highlight the effect autocorrelation has on volatility, which is obscured by the square-root-of-time rule used in the industry.
+\end{abstract}
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+@
+
+<<>>=
+source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/ACStdDev.annualized.R')
+@
+
+\section{Methodology}
+Given a sample of historical returns \((R_1, R_2, \ldots, R_T)\) observed at unit time interval $t$, the square-root-of-time translation of volatility can be written as:
+
+\begin{equation}
+ \sigma_{T} = \sqrt{T} \, \sigma_{t}
+\end{equation}
+
+
+\section{Usage}
+
+In this example we use the edhec database to compare the autocorrelation-adjusted annualized volatility with the unadjusted estimate.
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+ACFVol = ACStdDev.annualized(edhec[,1:3])
+Vol = StdDev.annualized(edhec[,1:3])
+Vol
+ACFVol
+barplot(rbind(ACFVol,Vol), main="ACF and Original Volatility",
+    xlab="Fund Type",ylab="Volatility (in %)", col=c("darkblue","red"), beside=TRUE)
+   legend("topright", c("1","2"), cex=0.6,
+   bty="2", fill=c("darkblue","red"));
+@
+
+The above figure compares the autocorrelation-adjusted annualized volatility of each fund with the unadjusted estimate.
+
+\end{document}
\ No newline at end of file

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/LoSharpe.Rnw
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/LoSharpe.Rnw	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/LoSharpe.Rnw	2013-08-31 20:52:41 UTC (rev 2955)
@@ -0,0 +1,76 @@
+\documentclass[12pt,letterpaper,english]{article}
+\usepackage{times}
+\usepackage[T1]{fontenc}
+\IfFileExists{url.sty}{\usepackage{url}}
+                      {\newcommand{\url}{\texttt}}
+
+\usepackage{babel}
+\usepackage{Rd}
+
+\usepackage{Sweave}
+\SweaveOpts{engine=R,eps = FALSE}
+\begin{document}
+\SweaveOpts{concordance=TRUE}
+
+\title{ Lo Sharpe Ratio }
+\author{R Project for Statistical Computing}
+% \keywords{Lo Sharpe Ratio,GLM Smooth Index,GLM Return Table}
+
+\makeatletter
+\makeatother
+\maketitle
+
+\begin{abstract}
+
+  This vignette gives an overview of the Lo Sharpe Ratio, which addresses the violation of the IID assumption in financial time series data.
+\end{abstract}
+
+
+<<>>=
+source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/LoSharpe.R')
+@
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+@
+\section{Background}
+The building blocks of the \textbf{Sharpe Ratio}, expected returns and volatilities, are unknown quantities that must be estimated statistically and are,
+therefore, subject to \emph{estimation error}. This raises the natural question: How
+\emph{accurately} are Sharpe ratios measured? To address this question, Andrew Lo derives explicit expressions for the statistical distribution of the Sharpe ratio using
+standard asymptotic theory.
+
+
+\section{Lo Sharpe Ratio}
+ Given a predefined benchmark Sharpe ratio $SR^\ast$, the observed Sharpe ratio $\hat{SR}(q)$ is asymptotically normally distributed around $SR(q)$:
+
+ \deqn{ \hat{SR}(q) - SR(q) \sim N(0, V_{GMM}(q)) }
+
+The estimator for the Sharpe ratio then follows directly, expressed in terms of the autocorrelation coefficients:
+\deqn{ \hat{SR}(q) = \hat{\eta}(q) \, \hat{SR} }
+\deqn{ \hat{\eta}(q) = \frac{q}{\sqrt{q + 2 \sum_{k=1}^{q-1} (q-k)\hat{\rho}_k}} }
+\section{Example}
+
+In an illustrative
+empirical example of mutual funds and hedge funds, we find results similar to those reported in the paper: the annual Sharpe ratio for a hedge fund can be overstated by as much as \textbf{65}\% because of the presence of \textbf{serial correlation}, and once
+this serial correlation is properly taken into account, the rankings of hedge
+funds based on \emph{Sharpe ratios} can change dramatically.
+
+<<>>=
+data(edhec)
+charts.PerformanceSummary(edhec[,2:4],
+colorset = rich6equal, lwd = 2, ylog = TRUE)
+@
+
+We can observe that the fund ``\textbf{Emerging Markets}'', which has the largest drawdown and serial autocorrelation, has its Andrew Lo Sharpe ratio \emph{decrease} most significantly compared to other funds.
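+
+As a cross-check on the package routine used in the next chunk, the
+adjustment factor $\hat{\eta}(q)$ can be computed directly from sample
+autocorrelations (a minimal sketch; the choice $q = 12$ and the direct use
+of \texttt{acf} are illustrative assumptions, not package code):
+
+<<>>=
+q   <- 12
+rho <- acf(as.numeric(edhec[, 2]), lag.max = q - 1, plot = FALSE)$acf[-1]
+q / sqrt(q + 2 * sum((q - seq_len(q - 1)) * rho))  # eta(q); equals sqrt(q) if IID
+@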
+<>= +Lo.Sharpe = LoSharpe(edhec[,2:4]) +Theoretical.Sharpe= SharpeRatio.annualized(edhec[,2:4]) +barplot(rbind(Theoretical.Sharpe,Lo.Sharpe), main="Theoretical and Andrew Lo Sharpe Ratio Observed", + xlab="Fund Type",ylab="Value", col=c("darkblue","red"), beside=TRUE) + legend("topright", c("1","2"), cex=0.6, + bty="2", fill=c("darkblue","red")); +@ + + +\end{document} Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/LoSharpe.pdf =================================================================== (Binary files differ) Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/LoSharpe.pdf ___________________________________________________________________ Added: svn:mime-type + application/octet-stream From noreply at r-forge.r-project.org Sat Aug 31 23:27:28 2013 From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org) Date: Sat, 31 Aug 2013 23:27:28 +0200 (CEST) Subject: [Returnanalytics-commits] r2956 - in pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm: R man vignettes Message-ID: <20130831212728.1B155185C4B@r-forge.r-project.org> Author: shubhanm Date: 2013-08-31 23:27:27 +0200 (Sat, 31 Aug 2013) New Revision: 2956 Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.pdf pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.Rnw pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.pdf Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd Log: ./ Clean vignettes Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R 2013-08-31 20:52:41 UTC (rev 2955) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R 2013-08-31 21:27:27 UTC (rev 2956) @@ -18,7 +18,7 @@ #' @keywords Maximum Loss Simulated Drawdown #' @examples #' library(PerformanceAnalytics) -#' AcarSim() +#' AcarSim(R) #' @rdname AcarSim #' @export AcarSim <- @@ -28,7 +28,7 @@ data(edhec) - R = checkData(edhec, method="xts") + R = checkData(R, method="xts") # Get dimensions and labels # simulated parameters using edhec data mu=mean(Return.annualized(R)) @@ -40,7 +40,7 @@ T= 36 j=1 dt=1/T -nsim=1; +nsim=30; thres=4; r=matrix(0,nsim,T+1) monthly = 0 Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R 2013-08-31 20:52:41 UTC (rev 2955) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R 2013-08-31 21:27:27 UTC (rev 2956) @@ -39,7 +39,7 @@ T= 36 j=1 dt=1/T - nsim=6000; + nsim=6; thres=4; r=matrix(0,nsim,T+1) monthly = 0 Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd 
=================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd 2013-08-31 20:52:41 UTC (rev 2955) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd 2013-08-31 21:27:27 UTC (rev 2956) @@ -31,7 +31,7 @@ } \examples{ library(PerformanceAnalytics) -AcarSim() +AcarSim(R) } \author{ Shubhankit Mohan Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.Rnw (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.Rnw 2013-08-31 21:27:27 UTC (rev 2956) @@ -0,0 +1,85 @@ +%% no need for \DeclareGraphicsExtensions{.pdf,.eps} + +\documentclass[12pt,letterpaper,english]{article} +\usepackage{times} +\usepackage[T1]{fontenc} +\IfFileExists{url.sty}{\usepackage{url}} + {\newcommand{\url}{\texttt}} + +\usepackage{babel} +%\usepackage{noweb} +\usepackage{Rd} + +\usepackage{Sweave} +\SweaveOpts{engine=R,eps=FALSE} +%\VignetteIndexEntry{Performance Attribution from Bacon} +%\VignetteDepends{PerformanceAnalytics} +%\VignetteKeywords{returns, performance, risk, benchmark, portfolio} +%\VignettePackage{PerformanceAnalytics} + +%\documentclass[a4paper]{article} +%\usepackage[noae]{Sweave} +%\usepackage{ucs} +%\usepackage[utf8x]{inputenc} +%\usepackage{amsmath, amsthm, latexsym} +%\usepackage[top=3cm, bottom=3cm, left=2.5cm]{geometry} +%\usepackage{graphicx} +%\usepackage{graphicx, verbatim} +%\usepackage{ucs} +%\usepackage[utf8x]{inputenc} +%\usepackage{amsmath, amsthm, latexsym} +%\usepackage{graphicx} + +\title{Chekhlov Conditional Drawdown at Risk} +\author{R Project for Statistical Computing} + +\begin{document} +\SweaveOpts{concordance=TRUE} + +\maketitle + + +\begin{abstract} +A new one-parameter family of risk measures called Conditional Drawdown (CDD) has +been proposed. These measures of risk are functionals of the portfolio drawdown (underwater) curve considered in active portfolio management. For some value of $\hat{\alpha}$ the tolerance parameter, in the case of a single sample path, drawdown functional is defined as the mean of the worst (1 \(-\) $\hat{\alpha}$)100\% drawdowns. The CDD measure generalizes the notion of the drawdown functional to a multi-scenario case and can be considered as a generalization of deviation measure to a dynamic case. The CDD measure includes the Maximal Drawdown and Average Drawdown as its limiting cases. +\end{abstract} + +<>= +library(PerformanceAnalytics) +data(edhec) +@ + +<>= +source("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/CDrawdown.R") +@ + +\section{Background} + +The model is focused on concept of drawdown measure which is in possession of all properties of a deviation measure,generalization of deviation measures to a dynamic case.Concept of risk profiling - Mixed Conditional Drawdown (generalization of CDD).Optimization techniques for CDD computation - reduction to linear programming (LP) problem. Portfolio optimization with constraint on Mixed CDD +The model develops concept of drawdown measure by generalizing the notion +of the CDD to the case of several sample paths for portfolio uncompounded rate +of return. 
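+
+As a complement to the formal definition in the next section, a minimal
+numeric sketch of the idea (illustrative assumptions: $\alpha = 0.95$ and
+drawdowns taken from the compounded wealth path of the first edhec series;
+the package routine \texttt{CDrawdown} used later is the reference
+implementation):
+
+<<>>=
+alpha <- 0.95
+r   <- as.numeric(edhec[, 1])
+cum <- cumprod(1 + r)                # compounded wealth path
+dd  <- 1 - cum / cummax(cum)         # drawdown magnitude at each date
+mean(dd[dd >= quantile(dd, alpha)])  # mean of the worst (1 - alpha) drawdowns
+@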
+
+
+\section{Methodology}
+For a given drawdown sequence $\{\xi_k\}$, where ${\xi}$ is the time-series drawdown vector, the conditional drawdown-at-risk is formally defined, via the CVaR machinery, as:
+\begin{equation}
+CVaR_{\alpha}(\xi)=\frac{\pi_{\xi}(\zeta(\alpha))-\alpha}{1-\alpha}\,\zeta(\alpha) + \frac{ \sum_{\xi_k > \zeta(\alpha)} \xi_k}{(1-\alpha)N}
+\end{equation}
+
+Note that the first term on the right-hand side of the equation appears because the defining inequality may hold only with ``greater than or equal to'' at the threshold $\zeta(\hat{\alpha})$. If the worst (1 \(-\) $\hat{\alpha}$)100\% of drawdowns can be counted precisely, the first term on the right-hand side disappears. The equation follows from the framework of the CVaR methodology.
+
+
+\section{Usage}
+
+In this example we use the edhec database to compute the conditional drawdown of hedge fund returns.
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+CDrawdown(edhec)
+@
+
+
+
+\end{document}
\ No newline at end of file

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.pdf
===================================================================
(Binary files differ)

Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.pdf
___________________________________________________________________
Added: svn:mime-type
   + application/octet-stream

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.Rnw
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.Rnw	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.Rnw	2013-08-31 21:27:27 UTC (rev 2956)
@@ -0,0 +1,135 @@
+%% no need for \DeclareGraphicsExtensions{.pdf,.eps}
+
+\documentclass[12pt,letterpaper,english]{article}
+\usepackage{times}
+\usepackage[T1]{fontenc}
+\IfFileExists{url.sty}{\usepackage{url}}
+                      {\newcommand{\url}{\texttt}}
+
+\usepackage{babel}
+%\usepackage{noweb}
+\usepackage{Rd}
+
+\usepackage{Sweave}
+\SweaveOpts{engine=R,eps=FALSE}
+%\VignetteIndexEntry{Performance Attribution from Bacon}
+%\VignetteDepends{PerformanceAnalytics}
+%\VignetteKeywords{returns, performance, risk, benchmark, portfolio}
+%\VignettePackage{PerformanceAnalytics}
+
+%\documentclass[a4paper]{article}
+%\usepackage[noae]{Sweave}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage[top=3cm, bottom=3cm, left=2.5cm]{geometry}
+%\usepackage{graphicx}
+%\usepackage{graphicx, verbatim}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage{graphicx}
+
+\title{Getmansky Lo Makarov Return Model}
+\author{R Project for Statistical Computing}
+
+\begin{document}
+\SweaveOpts{concordance=TRUE}
+
+\maketitle
+
+
+\begin{abstract}
+The returns to hedge funds and other alternative investments are often highly serially correlated. In this paper, we explore several sources of such serial correlation and show that the most likely explanation is illiquidity exposure and smoothed returns.
We propose an econometric model of return smoothing and develop estimators for the smoothing profile as well as a smoothing-adjusted Sharpe ratio.
+\end{abstract}
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+@
+
+<<>>=
+source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.GLM.R')
+source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/na.skip.R')
+@
+
+\section{Methodology}
+Given a sample of historical returns \((R_1, R_2, \ldots, R_T)\), the method assumes the fund manager smooths returns before reporting them.
+
+To quantify the impact of all of these possible sources of serial correlation, denote by \(R_t\) the true economic return of a hedge fund in period $t$, and let \(R_t\) satisfy the following linear single-factor model:
+
+\begin{equation}
+ R_t = \mu + \beta \delta_t + \xi_t
+\end{equation}
+
+where $\xi_t \sim N(0,1)$
+and $Var[R_t] = \sigma^2$.
+
+True returns represent the flow of information that would determine the equilibrium value of the fund's securities in a frictionless market. However, true economic returns are not observed. Instead, \(R_t^0\) denotes the reported or observed return in period $t$, and let
+\begin{equation}
+ R_t^0 = \theta_0 R_{t} + \theta_1 R_{t-1} + \theta_2 R_{t-2} + \cdots + \theta_k R_{t-k}
+\end{equation}
+\begin{equation}
+\theta_j \in [0,1], \quad j = 0, 1, \cdots, k
+\end{equation}
+
+and
+\begin{equation}
+\theta_0 + \theta_1 + \theta_2 + \cdots + \theta_k = 1
+\end{equation}
+
+which is a weighted average of the fund's true returns over the most recent k + 1
+periods, including the current period.
+\section{Smoothing Profile Estimates}
+
+Using the methods outlined above, the paper estimates the smoothing model
+using a maximum likelihood procedure, programmed in Matlab using the Optimization Toolbox and replicated in Stata using its MA(k) estimation routine. Using the time series analysis and computational finance (\textbf{tseries}) library, we fit an
+\textbf{ARMA} model to a univariate time series by
+conditional least squares. For exact maximum likelihood
+estimation, arima0 from package \textbf{stats} can be used.
+
+\section{Usage}
+
+In this example we use the edhec database to compute the unsmoothed (true) hedge fund returns.
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+Returns = Return.GLM(edhec[,1])
+skewness(edhec[,1])
+skewness(Returns)
+# Right shift of the returns distribution for a negatively skewed distribution
+kurtosis(edhec[,1])
+kurtosis(Returns)
+# Reduction in "peakedness" around the mean
+layout(rbind(c(1, 2), c(3, 4)))
+ chart.Histogram(Returns, main = "Plain", methods = NULL)
+ chart.Histogram(Returns, main = "Density", breaks = 40,
+                 methods = c("add.density", "add.normal"))
+ chart.Histogram(Returns, main = "Skew and Kurt",
+                 methods = c("add.centered", "add.rug"))
+chart.Histogram(Returns, main = "Risk Measures",
+                methods = c("add.risk"))
+@
+
+The above figure shows the distribution of the unsmoothed series tending toward a normal IID distribution. For comparative purposes, one can observe the change in the characteristics of the returns relative to the original series.
+ +<>= +library(PerformanceAnalytics) +data(edhec) +Returns = Return.GLM(edhec[,1]) +layout(rbind(c(1, 2), c(3, 4))) + chart.Histogram(edhec[,1], main = "Plain", methods = NULL) + chart.Histogram(edhec[,1], main = "Density", breaks = 40, + methods = c("add.density", "add.normal")) + chart.Histogram(edhec[,1], main = "Skew and Kurt", + methods = c("add.centered", "add.rug")) +chart.Histogram(edhec[,1], main = "Risk Measures", + methods = c("add.risk")) +@ + +\end{document} \ No newline at end of file Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.pdf =================================================================== (Binary files differ) Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.pdf ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.Rnw =================================================================== --- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.Rnw (rev 0) +++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.Rnw 2013-08-31 21:27:27 UTC (rev 2956) @@ -0,0 +1,107 @@ +%% no need for \DeclareGraphicsExtensions{.pdf,.eps} + +\documentclass[12pt,letterpaper,english]{article} +\usepackage{times} +\usepackage[T1]{fontenc} +\IfFileExists{url.sty}{\usepackage{url}} + {\newcommand{\url}{\texttt}} + +\usepackage{babel} +%\usepackage{noweb} +\usepackage{Rd} + +\usepackage{Sweave} +\SweaveOpts{engine=R,eps=FALSE} +%\VignetteIndexEntry{Performance Attribution from Bacon} +%\VignetteDepends{PerformanceAnalytics} +%\VignetteKeywords{returns, performance, risk, benchmark, portfolio} +%\VignettePackage{PerformanceAnalytics} + +%\documentclass[a4paper]{article} +%\usepackage[noae]{Sweave} +%\usepackage{ucs} +%\usepackage[utf8x]{inputenc} +%\usepackage{amsmath, amsthm, latexsym} +%\usepackage[top=3cm, bottom=3cm, left=2.5cm]{geometry} +%\usepackage{graphicx} +%\usepackage{graphicx, verbatim} +%\usepackage{ucs} +%\usepackage[utf8x]{inputenc} +%\usepackage{amsmath, amsthm, latexsym} +%\usepackage{graphicx} + +\title{GLM Smoothing Index} +\author{R Project for Statistical Computing} + +\begin{document} +\SweaveOpts{concordance=TRUE} + +\maketitle + + +\begin{abstract} +The returns to hedge funds and other alternative investments are often highly serially correlated.Gemanstsy,Lo and Markov propose an econometric model of return smoothingand develop estimators for the smoothing profile.The magnitude of impact is measured by the smoothing index, which is a measure of concentration of weight in lagged terms. +\end{abstract} + +<>= +library(PerformanceAnalytics) +data(edhec) +@ + +<>= +source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R') +@ + +\section{Background} +To quantify the impact of all of these possible sources of serial correlation, denote by \(R_t\),the true economic return of a hedge fund in period t; and let \(R_t\) satisfy the following linear single factor model: + +\begin{equation} + R_t = \\ {\mu} + {\beta}{{\delta}}_t+ \xi_t +\end{equation} + +Where $\xi_t, \sim N(0,1)$ +and Var[\(R_t\)] = $\sigma$\ \(^2\) + +True returns represent the flow of information that would determine the equilibrium value of the fund's securities in a frictionless market. However, true economic returns are not observed. 
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.Rnw
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.Rnw	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.Rnw	2013-08-31 21:27:27 UTC (rev 2956)
@@ -0,0 +1,107 @@
+%% no need for \DeclareGraphicsExtensions{.pdf,.eps}
+
+\documentclass[12pt,letterpaper,english]{article}
+\usepackage{times}
+\usepackage[T1]{fontenc}
+\IfFileExists{url.sty}{\usepackage{url}}
+ {\newcommand{\url}{\texttt}}
+
+\usepackage{babel}
+%\usepackage{noweb}
+\usepackage{Rd}
+
+\usepackage{Sweave}
+\SweaveOpts{engine=R,eps=FALSE}
+%\VignetteIndexEntry{GLM Smoothing Index}
+%\VignetteDepends{PerformanceAnalytics}
+%\VignetteKeywords{returns, performance, risk, benchmark, portfolio}
+%\VignettePackage{PerformanceAnalytics}
+
+\title{GLM Smoothing Index}
+\author{R Project for Statistical Computing}
+
+\begin{document}
+\SweaveOpts{concordance=TRUE}
+
+\maketitle
+
+
+\begin{abstract}
+The returns to hedge funds and other alternative investments are often highly serially correlated. Getmansky, Lo and Makarov propose an econometric model of return smoothing and develop estimators for the smoothing profile. The magnitude of the impact is measured by the smoothing index, a measure of the concentration of weight on lagged terms.
+\end{abstract}
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+@
+
+<<>>=
+source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R')
+@
+
+\section{Background}
+To quantify the impact of all of these possible sources of serial correlation, denote by \(R_t\) the true economic return of a hedge fund in period $t$, and let \(R_t\) satisfy the following linear single-factor model:
+
+\begin{equation}
+ R_t = \mu + \beta \delta_t + \xi_t
+\end{equation}
+
+where $\xi_t \sim N(0,1)$ and $\mathrm{Var}[R_t] = \sigma^2$.
+
+True returns represent the flow of information that would determine the equilibrium value of the fund's securities in a frictionless market. However, true economic returns are not observed. Instead, \(R_t^0\) denotes the reported or observed return in period $t$, given by
+\begin{equation}
+ R_t^0 = \theta_0 R_t + \theta_1 R_{t-1} + \theta_2 R_{t-2} + \cdots + \theta_k R_{t-k}
+\end{equation}
+\begin{equation}
+\theta_j \in [0,1], \quad j = 0, 1, \dots, k
+\end{equation}
+
+and
+\begin{equation}
+\theta_0 + \theta_1 + \theta_2 + \cdots + \theta_k = 1 ,
+\end{equation}
+
+which is a weighted average of the fund's true returns over the most recent $k+1$
+periods, including the current period.
+
+\section{Smoothing Index}
+A useful summary statistic for measuring the concentration of weights is:
+\begin{equation}
+\xi = \sum_{j=0}^{k} \theta_j^2
+\end{equation}
+
+This measure is well known in the industrial organization literature as the Herfindahl index, a measure of the concentration of firms in a given industry, where $\theta_j$ represents the market share of firm $j$. Because $\xi$ is confined to the unit interval, it is minimized when all the $\theta_j$'s are identical, which implies a value of $1/(k+1)$ for $\xi$, and is maximized when one coefficient is 1 and the rest are 0. In the context of smoothed returns, a lower value of $\xi$ implies more smoothing, and the upper bound of 1 implies no smoothing; hence we shall refer to $\xi$ as a ``\textbf{smoothing index}''.
+
+\section{Usage}
+
+In this example we use the edhec database to compute the smoothing index for hedge fund returns.
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+GLMSmoothIndex(edhec)
+@
+
+
+\end{document}
\ No newline at end of file

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.pdf
===================================================================
(Binary files differ)

Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.pdf
___________________________________________________________________
Added: svn:mime-type
   + application/octet-stream
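A quick editorial illustration of the smoothing index defined in the vignette above (not part of the committed file; the weight profiles are assumed purely for illustration):

# Sketch: the smoothing index xi = sum(theta_j^2) for two weight profiles.
# Equal weights (heavy smoothing) give the minimum 1/(k+1); all weight on
# the current period (no smoothing) gives the maximum of 1.
smooth.index <- function(theta) sum(theta^2)
smooth.index(rep(1/3, 3))  # k = 2, equal weights: 1/3
smooth.index(c(1, 0, 0))   # no smoothing: 1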
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.Rnw
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.Rnw	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.Rnw	2013-08-31 21:27:27 UTC (rev 2956)
@@ -0,0 +1,87 @@
+%% no need for \DeclareGraphicsExtensions{.pdf,.eps}
+
+\documentclass[12pt,letterpaper,english]{article}
+\usepackage{times}
+\usepackage[T1]{fontenc}
+\IfFileExists{url.sty}{\usepackage{url}}
+ {\newcommand{\url}{\texttt}}
+
+\usepackage{babel}
+%\usepackage{noweb}
+\usepackage{Rd}
+
+\usepackage{Sweave}
+\SweaveOpts{engine=R,eps=FALSE}
+%\VignetteIndexEntry{Maximum Loss and Maximum Drawdown in Financial Markets}
+%\VignetteDepends{PerformanceAnalytics}
+%\VignetteKeywords{returns, performance, risk, benchmark, portfolio}
+%\VignettePackage{PerformanceAnalytics}
+
+\title{Maximum Loss and Maximum Drawdown in Financial Markets}
+\author{R Project for Statistical Computing}
+
+\begin{document}
+\SweaveOpts{concordance=TRUE}
+
+\maketitle
+
+
+\begin{abstract}
+The main concern of this paper is the study of alternative risk measures: namely maximum loss and
+maximum drawdown. Both statistics have received little attention from academics despite their extensive use by proprietary traders and derivative fund managers.
+Firstly, this paper recalls from previously published research the expected maximum loss under the normal random walk with drift assumption. In that case, we see that exact analytical formulae can be established. The expected maximum loss can be derived as a function of the mean and standard deviation of the asset. For the maximum drawdown, exact formulae seem more difficult to establish.
+Therefore Monte-Carlo simulations have to be used.
+\end{abstract}
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+@
+
+<<>>=
+source("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R")
+@
+
+\section{Background}
+
+The model focuses on the concept of a drawdown measure that possesses all the properties of a deviation measure, generalizing deviation measures to the dynamic case. It covers risk profiling via the Mixed Conditional Drawdown (a generalization of the CDD), optimization techniques for CDD computation (reduction to a linear programming (LP) problem), and portfolio optimization with a constraint on the Mixed CDD. The model develops the concept of a drawdown measure by generalizing the notion
+of the CDD to the case of several sample paths for the portfolio's uncompounded rate
+of return.
+
+
+\section{Maximum Drawdown}
+Unfortunately, there are no analytical formulae establishing the maximum drawdown properties under the random walk assumption. We should note first that, by its definition, the maximum drawdown divided by volatility is a function only of the ratio of mean to volatility:
+\begin{equation}
+\mathrm{MD}/\sigma = \min_{t} \frac{\sum_{j=1}^{t} X_j}{\sigma} = F\left(\frac{\mu}{\sigma}\right)
+\end{equation}
+
+Such a ratio is useful because it is a complementary statistic to the return-divided-by-volatility ratio. To get some insight into the relationship between maximum drawdown per unit of volatility and mean return divided by volatility, we have run Monte-Carlo simulations. We simulated cash flows over a period of 36 monthly returns and measured the maximum drawdown for levels of annualised return divided by volatility varying from minus two to two in steps of 0.1. The process was repeated six thousand times.
+
+\section{Usage}
+The figure below illustrates the average maximum drawdown as well as the 85\%, 90\%, 95\% and 99\% quantiles. For instance, an investment exhibiting an annualised return/volatility equal to -2 should experience on average a maximum drawdown equal to six times the annualised volatility.
+
+Other observations are that maximum drawdown is a positive function of the return/volatility ratio, and that the confidence interval widens as the return/volatility ratio decreases. This means that as the return/volatility ratio increases, not only does the magnitude of drawdown decrease, but so does the width of the confidence interval. In other words, losses are both smaller and more predictable.
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+AcarSim(edhec)
+@
+
+
+
+\end{document}
\ No newline at end of file

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.pdf
===================================================================
(Binary files differ)

Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.pdf
___________________________________________________________________
Added: svn:mime-type
   + application/octet-stream
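As an editorial aside, the Monte-Carlo procedure described in the vignette above can be reproduced in a few lines of R. This is a minimal sketch, not the AcarSim implementation: the sample size and the choice of a single return/volatility level are illustrative assumptions.

# Sketch: simulate paths of 36 monthly normal returns for one level of
# annualised return/volatility and record the maximum drawdown of each
# cumulated (uncompounded) return path.
max.dd <- function(x) {
  path <- cumsum(x)
  min(path - cummax(path))  # most negative gap below the running peak
}
set.seed(1)
nsim  <- 1000            # illustrative; the paper repeats 6000 times
mu    <- -2 / 12         # monthly mean for annualised mu/sigma = -2
sigma <- 1 / sqrt(12)    # monthly vol for annualised volatility of 1
dd <- replicate(nsim, max.dd(rnorm(36, mu, sigma)))
mean(dd)                                 # average maximum drawdown
quantile(dd, c(0.15, 0.10, 0.05, 0.01))  # the 85/90/95/99% worst levels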
From noreply at r-forge.r-project.org  Sat Aug 31 23:38:47 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Sat, 31 Aug 2013 23:38:47 +0200 (CEST)
Subject: [Returnanalytics-commits] r2957 - in pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm: R vignettes
Message-ID: <20130831213847.6E52F185DB9@r-forge.r-project.org>

Author: shubhanm
Date: 2013-08-31 23:38:47 +0200 (Sat, 31 Aug 2013)
New Revision: 2957

Added:
   pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/NormCalmar.pdf
   pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/NormCalmar.rnw
   pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/OkunevWhite.Rnw
   pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/OkunevWhite.pdf
Modified:
   pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R
Log:
 ./ Further Addition of clean build vignettes

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R	2013-08-31 21:27:27 UTC (rev 2956)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R	2013-08-31 21:38:47 UTC (rev 2957)
@@ -40,7 +40,7 @@
 T= 36
 j=1
 dt=1/T
-nsim=30;
+nsim=3;
 thres=4;
 r=matrix(0,nsim,T+1)
 monthly = 0

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/NormCalmar.pdf
===================================================================
(Binary files differ)

Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/NormCalmar.pdf
___________________________________________________________________
Added: svn:mime-type
   + application/octet-stream

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/NormCalmar.rnw
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/NormCalmar.rnw	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/NormCalmar.rnw	2013-08-31 21:38:47 UTC (rev 2957)
@@ -0,0 +1,111 @@
+%% no need for \DeclareGraphicsExtensions{.pdf,.eps}
+
+\documentclass[12pt,letterpaper,english]{article}
+\usepackage{times}
+\usepackage[T1]{fontenc}
+\IfFileExists{url.sty}{\usepackage{url}}
+ {\newcommand{\url}{\texttt}}
+
+\usepackage{babel}
+%\usepackage{noweb}
+\usepackage{Rd}
+
+\usepackage{Sweave}
+\SweaveOpts{engine=R,eps=FALSE}
+%\VignetteIndexEntry{Normalized Calmar and Sterling Ratio}
+%\VignetteDepends{PerformanceAnalytics}
+%\VignetteKeywords{returns, performance, risk, benchmark, portfolio}
+%\VignettePackage{PerformanceAnalytics}
+
+\title{Normalized Calmar and Sterling Ratio}
+\author{R Project for Statistical Computing}
+
+\begin{document}
+\SweaveOpts{concordance=TRUE}
+
+\maketitle
+
+
+\begin{abstract}
+ Both the Calmar and the Sterling ratio are the ratio of annualized return over the absolute value of the maximum drawdown of an investment. The Sterling ratio adds an excess risk measure to the maximum drawdown, traditionally, and by default, 10\%. It is also traditional to use a three-year return series for these
+ calculations, although the functions included here make no effort to
+ determine the length of your series. However, Malik Magdon-Ismail devised a scaling law which can be used to compare Calmar/Sterling ratios with different
+$\mu$, $\sigma$ and $T$.
+\end{abstract}
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+@
+
+<<>>=
+source("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/CalmarRatio.Norm.R")
+source("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/SterlingRatio.Norm.R")
+source("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/QP.Norm.R")
+@
+
+\section{Background}
+Given a sample of historical returns \((R_1, R_2, \dots, R_T)\), the Calmar and Sterling ratios are defined as:
+
+\begin{equation}
+ \mathrm{Calmar\ Ratio} = \frac{\mathrm{Return}\,[0,T]}{\mathrm{Max\ Drawdown}\,[0,T]}
+\end{equation}
+
+\begin{equation}
+ \mathrm{Sterling\ Ratio} = \frac{\mathrm{Return}\,[0,T]}{\mathrm{Max\ Drawdown}\,[0,T] - 10\%}
+\end{equation}
+
+\section{Scaling Law}
+Malik Magdon-Ismail implemented a scaling law for different $\mu$, $\sigma$ and $T$, defined as:
+
+\begin{equation}
+\mathrm{Calmar}_{\tau} = \gamma(\tau, \mathrm{Sharpe}_{1})\,\mathrm{Calmar}_{T_{1}}
+\end{equation}
+
+where
+\begin{equation}
+\gamma(\tau, \mathrm{Sharpe}_{1}) = \frac{Q_p\left(T_1/2\,\mathrm{Sharpe}^2_{1}\right)/T_1}{Q_p\left(\tau/2\,\mathrm{Sharpe}^2_{1}\right)/\tau}
+\end{equation}
+
+and, as $T$ tends to infinity,
+\begin{equation}
+Q_p(T/2\,\mathrm{Sharpe}^2) = 0.63519 + \log(\mathrm{Sharpe}) + 0.5 \log T
+\end{equation}
+
+The same methodology applies to the Sterling ratio.
+\section{Usage}
+
+In this example we use the edhec database to compute the Calmar and Sterling ratios.
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+CalmarRatio.Norm(edhec,1)
+SterlingRatio.Norm(edhec,1)
+@
+
+We can see that as we shrink the period, the ratios decrease: the maximum drawdown does not change much as the time period is reduced, while returns scale approximately with the length of the period.
+
+\end{document}
\ No newline at end of file
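A brief editorial sketch of the scaling idea above (not part of the committed vignette; it simply transcribes the asymptotic $Q_p$ expression printed in the vignette and assumes a positive Sharpe ratio and illustrative inputs):

# Sketch: rescale a Calmar ratio measured over T1 years to a horizon of
# tau years, using the asymptotic Q_p(T/(2*Sharpe^2)) expression quoted
# above. Assumes sharpe > 0 (log() is undefined otherwise).
Qp.asym <- function(T.years, sharpe) 0.63519 + log(sharpe) + 0.5 * log(T.years)
scale.calmar <- function(calmar.T1, T1, tau, sharpe) {
  gamma <- (Qp.asym(T1, sharpe) / T1) / (Qp.asym(tau, sharpe) / tau)
  gamma * calmar.T1
}
scale.calmar(calmar.T1 = 0.8, T1 = 3, tau = 1, sharpe = 1)  # illustrative values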
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/OkunevWhite.Rnw
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/OkunevWhite.Rnw	(rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/OkunevWhite.Rnw	2013-08-31 21:38:47 UTC (rev 2957)
@@ -0,0 +1,135 @@
+%% no need for \DeclareGraphicsExtensions{.pdf,.eps}
+
+\documentclass[12pt,letterpaper,english]{article}
+\usepackage{times}
+\usepackage[T1]{fontenc}
+\IfFileExists{url.sty}{\usepackage{url}}
+ {\newcommand{\url}{\texttt}}
+
+\usepackage{babel}
+%\usepackage{noweb}
+\usepackage{Rd}
+
+\usepackage{Sweave}
+\SweaveOpts{engine=R,eps=FALSE}
+%\VignetteIndexEntry{Okunev White Return Model}
+%\VignetteDepends{PerformanceAnalytics}
+%\VignetteKeywords{returns, performance, risk, benchmark, portfolio}
+%\VignettePackage{PerformanceAnalytics}
+
+\title{Okunev White Return Model}
+\author{R Project for Statistical Computing}
+
+\begin{document}
+\SweaveOpts{concordance=TRUE}
+
+\maketitle
+
+
+\begin{abstract}
+The fact that many hedge fund returns exhibit extraordinary levels of serial correlation is now well known and generally accepted. Because hedge fund strategies have exceptionally high autocorrelations in reported returns, and this is taken as evidence of return smoothing, we first develop a method to completely eliminate any order of serial correlation across a wide array of time series processes. Once this is complete, we can determine the underlying risk factors of the ``true'' hedge fund returns and examine the incremental benefit attained from using nonlinear payoffs relative to the more traditional linear factors.
+\end{abstract}
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+@
+
+<<>>=
+source("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.Okunev.R")
+@
+
+\section{Methodology}
+Given a sample of historical returns \((R_1, R_2, \dots, R_T)\), the method assumes the fund manager smooths returns in the following manner:
+
+\begin{equation}
+ r_{0,t} = \sum_{i} \beta_{i} r_{0,t-i} + (1-\alpha) r_{m,t}
+\end{equation}
+
+where
+\begin{equation}
+\sum_{i} \beta_{i} = 1 - \alpha .
+\end{equation}
+
+Here \(r_{0,t}\) is the observed (reported) return at time $t$ (with 0 adjustments to reported returns), and \(r_{m,t}\) is the true underlying (unreported) return at time $t$ (determined by making $m$ adjustments to reported returns).
+
+The objective is to determine the true underlying return by removing the
+autocorrelation structure in the original return series without making any assumptions regarding the actual time series properties of the underlying process. We are implicitly assuming by this approach that the autocorrelations that arise in reported returns are entirely due to the smoothing behavior funds engage in when reporting results. In fact, the method may be adapted to produce any desired level of autocorrelation at any lag and is not limited to simply eliminating all autocorrelations.
+
+\section{To Remove Up to m Orders of Autocorrelation}
+To remove the first $m$ orders of autocorrelation from a given return series we proceed in a manner very similar to that detailed in \textbf{Geltner Return}. We initially remove the first-order autocorrelation, then proceed to eliminate the second-order autocorrelation through the iteration process. In general, to remove any order $m$ of autocorrelation from a given return series we make the following transformation to returns:
+
+\begin{equation}
+r_{m,t}=\frac{r_{m-1,t}-c_{m}r_{m-1,t-m}}{1-c_{m}}
+\end{equation}
+
+where \(r_{m-1,t}\) is the return series with the first $m-1$ orders of autocorrelation removed. The general form for all the autocorrelations given by the process is:
+\begin{equation}
+a_{m,n}=\frac{a_{m-1,n}(1+c_{m}^2)-c_{m}(1+a_{m-1,2m})}{1+c_{m}^2 -2c_{m}a_{m-1,n}}
+\end{equation}
+
+Once a solution is found for \(c_{m}\) to create \(r_{m,t}\), one will need to iterate back to remove the first $m-1$ autocorrelations again. One will then need to once again remove the $m$th autocorrelation using the adjustment in equation (3). The process continues until the first $m$ autocorrelations are sufficiently close to zero.
+
+\section{Usage}
+
+In this example we use the edhec database to compute true hedge fund returns.
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+Returns = Return.Okunev(edhec[,1])
+skewness(edhec[,1])
+skewness(Returns)
+# Right shift of the returns distribution for a negatively skewed distribution
+kurtosis(edhec[,1])
+kurtosis(Returns)
+# Reduction in "peakedness" around the mean
+layout(rbind(c(1, 2), c(3, 4)))
+ chart.Histogram(Returns, main = "Plain", methods = NULL)
+ chart.Histogram(Returns, main = "Density", breaks = 40,
+                 methods = c("add.density", "add.normal"))
+ chart.Histogram(Returns, main = "Skew and Kurt",
+                 methods = c("add.centered", "add.rug"))
+chart.Histogram(Returns, main = "Risk Measures",
+                methods = c("add.risk"))
+@
+
+The above figure shows the behaviour of the distribution tending to a normal i.i.d. distribution. For comparative purposes, one can observe the change in the characteristics of the returns relative to the original series.
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+Returns = Return.Okunev(edhec[,1])
+layout(rbind(c(1, 2), c(3, 4)))
+ chart.Histogram(edhec[,1], main = "Plain", methods = NULL)
+ chart.Histogram(edhec[,1], main = "Density", breaks = 40,
+                 methods = c("add.density", "add.normal"))
+ chart.Histogram(edhec[,1], main = "Skew and Kurt",
+                 methods = c("add.centered", "add.rug"))
+chart.Histogram(edhec[,1], main = "Risk Measures",
+                methods = c("add.risk"))
+
+@
+
+\end{document}
\ No newline at end of file

Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/OkunevWhite.pdf
===================================================================
(Binary files differ)
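An editorial sketch of the single-lag case of the transformation above (not part of the committed vignette; choosing $c$ as the sample lag-1 autocorrelation is the Geltner special case, and the edhec column is only an illustration):

# Sketch: remove first-order autocorrelation via
# r.adj_t = (r_t - c * r_{t-1}) / (1 - c), with c set to the sample
# lag-1 autocorrelation (the Geltner special case of the method above).
library(PerformanceAnalytics)
data(edhec)
r  <- as.numeric(edhec[, 1])
c1 <- acf(r, lag.max = 1, plot = FALSE)$acf[2]
r.adj <- (r[-1] - c1 * r[-length(r)]) / (1 - c1)
acf(r.adj, lag.max = 1, plot = FALSE)  # lag-1 autocorrelation roughly removed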
[TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/returnanalytics -r 2957

From noreply at r-forge.r-project.org  Sat Aug 31 23:49:14 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Sat, 31 Aug 2013 23:49:14 +0200 (CEST)
Subject: [Returnanalytics-commits] r2958 - in pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm: R man
Message-ID: <20130831214914.31E701853CF@r-forge.r-project.org>

Author: braverock
Date: 2013-08-31 23:49:13 +0200 (Sat, 31 Aug 2013)
New Revision: 2958

Modified:
   pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.Autocorrelation.R
   pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/chart.Autocorrelation.Rd
Log:
- fix broken example

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.Autocorrelation.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.Autocorrelation.R	2013-08-31 21:38:47 UTC (rev 2957)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.Autocorrelation.R	2013-08-31 21:49:13 UTC (rev 2958)
@@ -17,7 +17,7 @@
 #' @keywords Autocorrelation lag factors
 #' @examples
 #'
-#' data(edhec[,1])
+#' data(edhec)
 #' chart.Autocorrelation(edhec[,1])
 #'
 #' @rdname chart.Autocorrelation

Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/chart.Autocorrelation.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/chart.Autocorrelation.Rd	2013-08-31 21:38:47 UTC (rev 2957)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/chart.Autocorrelation.Rd	2013-08-31 21:49:13 UTC (rev 2958)
@@ -1,44 +1,44 @@
-\name{chart.Autocorrelation}
-\alias{chart.Autocorrelation}
-\title{Stacked Bar Autocorrelation Plot}
-\usage{
-  chart.Autocorrelation(R)
-}
-\arguments{
-  \item{R}{an xts, vector, matrix, data frame, timeSeries
-  or zoo object of an asset return}
-}
-\value{
-  Stack Bar plot of lagged return coefficients
-}
-\description{
-  A wrapper to create box and whiskers plot of lagged
-  autocorrelation analysis
-}
-\details{
-  We have also provided controls for all the symbols and
-  lines in the chart. One default, set by
-  \code{as.Tufte=TRUE}, will strip chartjunk and draw a
-  Boxplot per recommendations by Burghardt, Duncan and
-  Liu(2013)
-}
-\examples{
-data(edhec[,1])
-chart.Autocorrelation(edhec[,1])
-}
-\author{
-  Peter Carl, Brian Peterson, Shubhankit Mohan
-}
-\references{
-  Burghardt, G., and L.
Liu, \emph{ It's the - Autocorrelation, Stupid (November 2012) Newedge working - paper.} - \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} -} -\seealso{ - \code{\link[graphics]{boxplot}} -} -\keyword{Autocorrelation} -\keyword{factors} -\keyword{lag} - +\name{chart.Autocorrelation} +\alias{chart.Autocorrelation} +\title{Stacked Bar Autocorrelation Plot} +\usage{ + chart.Autocorrelation(R) +} +\arguments{ + \item{R}{an xts, vector, matrix, data frame, timeSeries + or zoo object of an asset return} +} +\value{ + Stack Bar plot of lagged return coefficients +} +\description{ + A wrapper to create box and whiskers plot of lagged + autocorrelation analysis +} +\details{ + We have also provided controls for all the symbols and + lines in the chart. One default, set by + \code{as.Tufte=TRUE}, will strip chartjunk and draw a + Boxplot per recommendations by Burghardt, Duncan and + Liu(2013) +} +\examples{ +data(edhec) +chart.Autocorrelation(edhec[,1]) +} +\author{ + Peter Carl, Brian Peterson, Shubhankit Mohan +} +\references{ + Burghardt, G., and L. Liu, \emph{ It's the + Autocorrelation, Stupid (November 2012) Newedge working + paper.} + \url{http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf} +} +\seealso{ + \code{\link[graphics]{boxplot}} +} +\keyword{Autocorrelation} +\keyword{factors} +\keyword{lag} +