[Returnanalytics-commits] r1996 - in pkg/PerformanceAnalytics/sandbox/Meucci: . R data demo man

noreply at r-forge.r-project.org noreply at r-forge.r-project.org
Thu Jun 7 23:06:06 CEST 2012


Author: braverock
Date: 2012-06-07 23:06:06 +0200 (Thu, 07 Jun 2012)
New Revision: 1996

Added:
   pkg/PerformanceAnalytics/sandbox/Meucci/DESCRIPTION
   pkg/PerformanceAnalytics/sandbox/Meucci/R/
   pkg/PerformanceAnalytics/sandbox/Meucci/R/ButterflyTrading.R
   pkg/PerformanceAnalytics/sandbox/Meucci/R/CmaCopula.R
   pkg/PerformanceAnalytics/sandbox/Meucci/R/DetectOutliersviaMVE.R
   pkg/PerformanceAnalytics/sandbox/Meucci/R/EntropyProg.R
   pkg/PerformanceAnalytics/sandbox/Meucci/R/FullyFlexibleBayesNets.R
   pkg/PerformanceAnalytics/sandbox/Meucci/R/HermiteGrid.R
   pkg/PerformanceAnalytics/sandbox/Meucci/R/InvariantProjection.R
   pkg/PerformanceAnalytics/sandbox/Meucci/R/Prior2Posterior.R
   pkg/PerformanceAnalytics/sandbox/Meucci/R/RankingInformation.R
   pkg/PerformanceAnalytics/sandbox/Meucci/R/RobustBayesianAllocation.R
   pkg/PerformanceAnalytics/sandbox/Meucci/R/logToArithmeticCovariance.R
   pkg/PerformanceAnalytics/sandbox/Meucci/data/
   pkg/PerformanceAnalytics/sandbox/Meucci/data/FactorDistributions.rda
   pkg/PerformanceAnalytics/sandbox/Meucci/data/MeucciFreaqEst.rda
   pkg/PerformanceAnalytics/sandbox/Meucci/data/MeucciReturnsDistribution.rda
   pkg/PerformanceAnalytics/sandbox/Meucci/data/MeucciTweakTest.rda
   pkg/PerformanceAnalytics/sandbox/Meucci/data/butterflyTradingX.rda
   pkg/PerformanceAnalytics/sandbox/Meucci/demo/
   pkg/PerformanceAnalytics/sandbox/Meucci/demo/AnalyticalvsNumerical.R
   pkg/PerformanceAnalytics/sandbox/Meucci/demo/ButterflyTrading.R
   pkg/PerformanceAnalytics/sandbox/Meucci/demo/DetectOutliersviaMVE.R
   pkg/PerformanceAnalytics/sandbox/Meucci/demo/FullyFlexibleBayesNets.R
   pkg/PerformanceAnalytics/sandbox/Meucci/demo/HermiteGrid_demo.R
   pkg/PerformanceAnalytics/sandbox/Meucci/demo/Prior2Posterior.R
   pkg/PerformanceAnalytics/sandbox/Meucci/demo/RankingInformation.R
   pkg/PerformanceAnalytics/sandbox/Meucci/demo/RobustBayesianAllocation.R
   pkg/PerformanceAnalytics/sandbox/Meucci/demo/logToArithmeticCovariance.R
   pkg/PerformanceAnalytics/sandbox/Meucci/man/
   pkg/PerformanceAnalytics/sandbox/Meucci/man/CMAcombination.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/CMAseparation.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/Central2Raw.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/ComputeMVE.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/ComputeMoments.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/CondProbViews.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/Cumul2Raw.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/DetectOutliersViaMVE.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/EfficientFrontier.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/EntropyProg.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/GenerateLogNormalDistribution.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/MvnRnd.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/NoisyObservations.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/PanicCopula.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/PartialConfidencePosterior.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/PlotDistributions.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/Prior2Posterior.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/ProjectInvariant.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/Raw2Central.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/Raw2Cumul.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/RejectOutlier.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/StackedBarChart.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/SummStats.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/Tweak.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/ViewRanking.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/efficientFrontier.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/linreturn.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/robustBayesianPortfolioOptimization.Rd
   pkg/PerformanceAnalytics/sandbox/Meucci/man/std.Rd
Removed:
   pkg/PerformanceAnalytics/sandbox/Meucci/FactorDistributions.rda
   pkg/PerformanceAnalytics/sandbox/Meucci/MeucciAnalyticalvsNumerical.R
   pkg/PerformanceAnalytics/sandbox/Meucci/MeucciButterflyTrading.R
   pkg/PerformanceAnalytics/sandbox/Meucci/MeucciCmaCopula.R
   pkg/PerformanceAnalytics/sandbox/Meucci/MeucciEntropyProg.R
   pkg/PerformanceAnalytics/sandbox/Meucci/MeucciFreaqEst.rda
   pkg/PerformanceAnalytics/sandbox/Meucci/MeucciFullFlexibleBayesNets.R
   pkg/PerformanceAnalytics/sandbox/Meucci/MeucciHermiteGrid.R
   pkg/PerformanceAnalytics/sandbox/Meucci/MeucciInvariantProjection.R
   pkg/PerformanceAnalytics/sandbox/Meucci/MeucciRankingInformation.R
   pkg/PerformanceAnalytics/sandbox/Meucci/MeucciReturnsDistribution.rda
   pkg/PerformanceAnalytics/sandbox/Meucci/MeucciRobustBayesianAllocation.R
   pkg/PerformanceAnalytics/sandbox/Meucci/MeucciTweakTest.rda
   pkg/PerformanceAnalytics/sandbox/Meucci/Meucci_DetectOutliersviaMVE.R
   pkg/PerformanceAnalytics/sandbox/Meucci/butterflyTradingX.rda
   pkg/PerformanceAnalytics/sandbox/Meucci/logToArithmeticCovariance.R
Modified:
   pkg/PerformanceAnalytics/sandbox/Meucci/Meucci_functions.csv
Log:


Added: pkg/PerformanceAnalytics/sandbox/Meucci/DESCRIPTION
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Meucci/DESCRIPTION	                        (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Meucci/DESCRIPTION	2012-06-07 21:06:06 UTC (rev 1996)
@@ -0,0 +1,38 @@
+Package: PerformanceAnalytics
+Type: Package
+Title: Econometric tools for performance and risk analysis.
+Version: 1.0.4.5
+Date: $Date: 2012-06-06 15:18:48 -0500 (Wed, 06 Jun 2012) $
+Author: Peter Carl, Brian G. Peterson
+Maintainer: Brian G. Peterson <brian at braverock.com>
+Description: Collection of econometric functions for
+    performance and risk analysis. This package aims to aid
+    practitioners and researchers in utilizing the latest
+    research in analysis of non-normal return streams.  In
+    general, it is most tested on return (rather than
+    price) data on a regular scale, but most functions will
+    work with irregular return data as well, and increasing
+    numbers of functions will work with P&L or price data
+    where possible.
+Depends:
+    R (>= 2.14.0),
+    zoo,
+    xts (>= 0.8)
+Suggests:
+    Hmisc,
+    MASS,
+    tseries,
+    quadprog,
+    sn,
+    robustbase,
+    quantreg,
+    gplots,
+    ff
+License: GPL
+URL: http://r-forge.r-project.org/projects/returnanalytics/
+Copyright: (c) 2004-2012
+Contributors: Kris Boudt,  Diethelm Wuertz, Eric Zivot, Matthieu Lestel
+Thanks: A special thanks for additional contributions from
+    Stefan Albrecht, Khanh Nguyen, Jeff Ryan,
+    Josh Ulrich, Sankalp Upadhyay, Tobias Verbeke, 
+    H. Felix Wittmann, Ram Ahluwalia

Deleted: pkg/PerformanceAnalytics/sandbox/Meucci/FactorDistributions.rda
===================================================================
(Binary files differ)

Deleted: pkg/PerformanceAnalytics/sandbox/Meucci/MeucciAnalyticalvsNumerical.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Meucci/MeucciAnalyticalvsNumerical.R	2012-06-07 15:54:36 UTC (rev 1995)
+++ pkg/PerformanceAnalytics/sandbox/Meucci/MeucciAnalyticalvsNumerical.R	2012-06-07 21:06:06 UTC (rev 1996)
@@ -1,140 +0,0 @@
-# TODO: Determine how to extend correlation view to multiple assets
-# TODO: Create plot distributions function
-
-PlotDistributions = function( X , p , Mu , Sigma , p_ , Mu_ , Sigma_ )
-    {
-    J = nrow( X )
-    N = ncol( X )
-
-    NBins = round( 10*log( J ) )
-    
-    for ( n in 1:N )
-        {        
-        # set ranges
-        xl = min(X[ , n ] )
-        xh = max(X[ , n ] )
-        x = seq( xl , xh , length.out = 101 ) # evaluation grid from xl to xh (used by the plots below)
-        
-        # posterior numerical
-        h3 = pHist(X[ ,n] , p_ , NBins )
-    
-        # posterior analytical        
-        h4 = plot( x , normpdf( x , Mu_[n] , sqrt( Sigma_[n,n] ) ) )        
-        
-        # prior analytical
-        h2 = plot( x , normpdf( x , Mu[n] ,sqrt( Sigma[n,n] ) ) )
-    
-            # xlim( cbind( xl , xh ) )
-            # legend([h3 h4 h2],'numerical', 'analytical', 'prior')
-        }
-    }
-
-
-#' Calculate the full-confidence posterior distributions of Mu and Sigma
-#'
-#' @param M     a numeric vector with the Mu of the normal reference model
-#' @param Q     a numeric vector used to construct a view on expectation of the linear combination Q %*% X
-#' @param M_Q   a numeric vector with the view of the expectations of QX
-#' @param S     a covariance matrix for the normal reference model
-#' @param G     a numeric vector used to construct a view on covariance of the linear combination G %*% X
-#' @param S_G   a numeric with the expectation associated with the covariance of the linear combination GX
-#'
-#' @return a list with 
-#'             M_   a numeric vector with the full-confidence posterior distribution of Mu
-#'             S_   a covariance matrix with the full-confidence posterior distribution of Sigma
-#'
-#' @references 
-#' \url{http://www.symmys.com}
-#' See Meucci script Prior2Posterior.m attached to Entropy Pooling Paper
-#' @author Ram Ahluwalia \email{ram@@wingedfootcapital.com}
-Prior2Posterior = function( M , Q , M_Q , S , G , S_G )
-    {  
-    # See Appendix A.1 formula 49 for derivation
-    M_ = M + S %*% t(Q) %*% solve( Q %*% S %*% t(Q) ) %*% (M_Q - Q %*% M)
-
-    # See Appendix A.1 formula 57 for derivation
-    S_= S + (S %*% t(G)) %*% ( solve( G %*% S %*% t(G) ) %*% S_G %*% solve( G %*% S %*% t(G) ) - solve( G%*%S%*%t(G) ) ) %*% ( G %*% S )
-
-    return ( list( M_ = M_ , S_ = S_ ) )    
-    }
-
-
-# This example script compares the numerical and the analytical solution of entropy-pooling, see
-# "A. Meucci - Fully Flexible Views: Theory and Practice 
-# and associated script S_Main
-# Example compares analytical vs. numerical approach to entropy pooling
-
-# Code by A. Meucci, September 2008
-# Last version available at www.symmys.com > Teaching > MATLAB
-
-###############################################################
-# prior
-###############################################################
-library( matlab )
-library( MASS )
-# analytical representation
-N = 2 # market dimension (2 assets)
-Mu = zeros( N , 1 )
-r= .6
-Sigma = ( 1 - r ) * eye( N ) + r * ones( N , N ) # NxN correlation matrix with correlation 'r' in the off-diagonals
-
-# numerical representation
-J = 100000 # number of scenarios
-p = ones( J , 1 ) / J
-dd = mvrnorm( J / 2 , zeros( N , 1 ) , Sigma ) # distribution centered on (0,0) with variance Sigma
-X = ones( J , 1 ) %*% t(Mu) + rbind( dd , -dd ) # JxN matrix of scenarios
-
-###############################################################
-# views
-###############################################################
-
-# location
-Q = matrix( c( 1 , -1 ) , nrow = 1 ) # long the first asset and short the second asset produces an expectation (Mu_Q, set below)
-Mu_Q = .5
-
-# scatter
-G = matrix( c( -1 , 1 ) , nrow = 1 )
-Sigma_G = .5^2
-
-###############################################################
-#  posterior 
-###############################################################
-
-# analytical posterior
-RevisedMuSigma = Prior2Posterior( Mu , Q , Mu_Q , Sigma , G , Sigma_G )
-Mu_ = RevisedMuSigma$M_
-Sigma_ = RevisedMuSigma$S_
-# numerical posterior
-Aeq = ones( 1 , J )  # constrain probabilities to sum to one...
-beq = 1
-
-# create views
-    QX = X %*% t(Q) # a Jx1 matrix
-
-    Aeq = rbind( Aeq , t(QX) )    # ...constrain the first moments... 
-        # QX is a linear combination of vector Q and the scenarios X
-    
-    beq = rbind( beq , Mu_Q )
-    
-    SecMom = G %*% Mu_ %*% t(Mu_) %*% t(G) + Sigma_G  # ...constrain the second moments... 
-        # We use Mu_ from analytical result. We do not use Revised Sigma because we are testing whether
-        # the numerical approach for handling expectations of covariance matches the analytical approach
-        # TODO: Can we perform this procedure without relying on Mu_ from the analytical result?
-    GX = X %*% t(G)
-    
-    for ( k in 1:nrow( G ) )
-        {
-        for ( l in k:nrow( G ) )
-            {
-            Aeq = rbind( Aeq , t(GX[ , k ] * GX[ , l ] ) )
-            beq = rbind( beq , SecMom[ k , l ] )
-            }
-        }
-
-emptyMatrix = matrix( , nrow = 0 , ncol = 0 )
-p_ = EntropyProg( p , emptyMatrix , emptyMatrix , Aeq , beq )$p_ # ...compute posterior probabilities
-
-###############################################################
-# plots
-###############################################################
-PlotDistributions( X , p , Mu , Sigma , p_ , Mu_ , Sigma_ )
\ No newline at end of file
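The analytical update in the deleted Prior2Posterior above (formulas 49 and 57 of the Entropy Pooling paper) can be restated standalone on the same 2-asset toy market. This is an illustrative sketch, not part of the commit; the names prior2posterior and post are assumptions. At full confidence the posterior reproduces both views exactly:

```r
# Illustrative restatement of the full-confidence posterior update
# (formulas 49 and 57 of the Entropy Pooling paper).
prior2posterior <- function( M , Q , M_Q , S , G , S_G )
  {
  QSQinv = solve( Q %*% S %*% t(Q) )
  M_ = M + S %*% t(Q) %*% QSQinv %*% ( M_Q - Q %*% M )                                 # formula 49
  GSGinv = solve( G %*% S %*% t(G) )
  S_ = S + ( S %*% t(G) ) %*% ( GSGinv %*% S_G %*% GSGinv - GSGinv ) %*% ( G %*% S )   # formula 57
  list( M_ = M_ , S_ = S_ )
  }

Mu    = matrix( c( 0 , 0 ) )                    # prior means
Sigma = matrix( c( 1 , .6 , .6 , 1 ) , 2 , 2 )  # correlation .6, as in the demo
Q = matrix( c( 1 , -1 ) , nrow = 1 )            # view on the long-short expectation
G = matrix( c( -1 , 1 ) , nrow = 1 )            # view on the long-short variance
post = prior2posterior( Mu , Q , .5 , Sigma , G , .25 )
Q %*% post$M_           # posterior satisfies the location view: .5
G %*% post$S_ %*% t(G)  # posterior satisfies the scatter view: .25
```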

Deleted: pkg/PerformanceAnalytics/sandbox/Meucci/MeucciButterflyTrading.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Meucci/MeucciButterflyTrading.R	2012-06-07 15:54:36 UTC (rev 1995)
+++ pkg/PerformanceAnalytics/sandbox/Meucci/MeucciButterflyTrading.R	2012-06-07 21:06:06 UTC (rev 1996)
@@ -1,445 +0,0 @@
-# Todo: plot efficient frontier in R
-
-PlotFrontier = function( e , s , w )
-  {
-  # subplot(2,1,1)
-  plot( s , e )
-  # grid on
-  # set(gca,'xlim',[min(s) max(s)])
-  # 
-  # subplot(2,1,2)
-  
-  xx = nrow( w ) ; N = ncol( w )
-  Data = apply( w , 1 , cumsum ) #TODO: Check. Take cumulative sum of *rows*. Try sapply?  
-  
-  for ( n in 1:N ) 
-    {
-    x = cbind( min(s) , s , max(s) )
-    y = cbind( 0 , Data[ , N-n+1 ] , 0 )
-    # hold on
-    #h = fill( x , y , cbind( .9 , .9 , .9) - mod( n , 3 ) %*% cbind( .2 , .2 , .2) )
-    }
-  
-  #set(gca,'xlim',[min(s) max(s)],'ylim',[0 max(max(Data))])
-  #xlabel('portfolio # (risk propensity)')
-  #ylabel('portfolio composition')
-  }
-
-ViewCurveSlope = function( X , p )
-  {
-  # view 3 (expectations and binding constraints): slope of the yield curve will increase by 5 bp
-    
-  J = nrow( X ) ; K = ncol( X )
-  
-  # constrain probabilities to sum to one...
-  Aeq = ones( 1 , J )
-  beq = 1
-  
-  # ...constrain the expectation...
-  V = X[ , 14 ] - X[ , 13 ]
-  v = .0005
-  
-  Aeq = rbind( Aeq , t(V) )
-      
-  beq = rbind( beq , v )
-  
-  A = b = emptyMatrix  
-  
-  # ...compute posterior probabilities
-  p_ = EntropyProg( p , A , b , Aeq ,beq )$p_  
-  return( p_ )
-  }
-     
-ViewRealizedVol = function( X , p )
-  {  
-  # view 2 (relative inequality view on median): bullish on realized volatility of MSFT (i.e. absolute log-change in the underlying). 
-  # This is the variable such that, if larger than a threshold, a long position in the butterfly turns into a profit (e.g. Rachev 2003)
-  # we issue a relative statement on the median, comparing it with the third quintile implied by the reference market model
-    
-  library( matlab )
-  J = nrow( X ) ; K = ncol( X )
-  
-  # constrain probabilities to sum to one...
-  Aeq = ones( 1 , J )
-  beq = 1
-  
-  # ...constrain the median...  
-  V = abs( X[ , 1 ] ) # absolute value of the log of changes in MSFT close prices (definition of realized volatility)  
-  
-  V_Sort = sort( V , decreasing = FALSE ) # sorting of the abs value of log changes in prices from smallest to largest
-  I_Sort = order( V ) 
-  
-  F = cumsum( p[ I_Sort ] ) # represents the cumulative sum of probabilities from ~0 to 1
-  
-  I_Reference = max( matlab:::find( F <= 3/5 ) ) # finds the (max) index corresponding to element with value <= 3/5 along the empirical cumulative density function for the abs log-changes in price
-  V_Reference = V_Sort[ I_Reference ] # returns the corresponding abs log of change in price at the 3/5 of the cumulative density function
-
-  I_Select = find( V <= V_Reference ) # finds all indices with value of abs log-change in price less than the reference value
-  
-  a = zeros( 1 , J )
-  a[ I_Select ] = 1 # select those cases where the abs log-change in price is below the 3/5 quantile of the empirical distribution...
-  
-  A = a
-  b = .5 # ... and assign these cases a total probability of 50%. This shifts the median of the distribution
-  
-  # ...compute posterior probabilities
-  p_ = EntropyProg( p , A , b , Aeq , beq )$p_
-  
-  return( p_ )
-  }
-
-ViewImpliedVol = function( X , p )
-  {    
-  # View 1 (inequality view): bearish on the 2m-6m implied volatility spread for Google
-    
-  J = nrow( X ) ; K = ncol( X )
-    
-  # constrain probabilities to sum to one...
-    Aeq = ones( 1 , J )
-    beq = 1 
-  
-  # ...constrain the expectation...
-    V = X[ , 12 ] - X[ , 11 ] # GOOG_vol_182 (6m implied vol) - GOOG_vol_91 (2m implied vol)
-    m = mean( V )
-    s = std( V )
-    
-    A = t( V )
-    b = m - s
-  
-  # ...compute posterior probabilities
-    p_ = EntropyProg( p , A , b , Aeq , beq )$p_
-  
-  return( p_ )
-  }
-
-ComputeCVaR = function( Units , Scenarios , Conf )
-  {
-  PnL = Scenarios %*% Units
-  Sort_PnL = PnL[ order( PnL , decreasing = FALSE ) ] # DOUBLE CHECK IF I SHOULD USE ORDER INSTEAD OF SORT
-  
-  J = length( PnL )
-  Cut = round( J %*% ( 1 - Conf ) , 0 )
-  
-  CVaR = -mean( Sort_PnL[ 1:Cut ] )
-
-  return( CVaR )
-  }
-
-LongShortMeanCVaRFrontier = function( PnL , Probs , Butterflies , Options )
-  {
-  library( matlab )
-  library( quadprog )
-  library( limSolve )
-  
-  # setup constraints
-  J = nrow(PnL); N = ncol(PnL)
-  P_0s = matrix(  , nrow = 1 , ncol = 0 )
-  D_s  = matrix(  , nrow = 1 , ncol = 0 )
-  emptyMatrix = matrix( nrow = 0 , ncol = 0 )
-  
-  for ( n in 1:N )
-    {
-    P_0s = cbind( P_0s , Butterflies[[n]]$P_0 ) # 1x9 matrix
-    D_s = cbind( D_s , Butterflies[[n]]$Delta ) # 1x9 matrix
-    }
-  
-  Constr = list()
-  Constr$Aeq = P_0s # linear coefficients in the constraints Aeq*X = beq (equality constraints)
-  Constr$beq = Options$Budget # the constant vector in the constraints Aeq*x = beq
-
-  if ( Options$DeltaNeutral == TRUE ) 
-    {
-    Constr$Aeq = rbind( Constr$Aeq , D_s ) # 2x9 matrix
-    Constr$beq = rbind( Constr$beq , 0 )   # 2x1 vector
-    }
-  
-  Constr$Aleq = rbind( diag( as.vector( P_0s ) ) , -diag( as.vector( P_0s ) ) ) # linear coefficients in the constraints A*x <= b. an 18x9 matrix
-  Constr$bleq = rbind( Options$Limit * ones(N,1) , Options$Limit * ones(N,1) ) # constant vector in the constraints A*x <= b. an 18x1 matrix
-  
-  # determine expectation of minimum-variance portfolio
-  Exps = t(PnL) %*% Probs
-  Scnd_Mom = t(PnL) %*% (PnL * (Probs %*% ones(1,N) ) )
-  Scnd_Mom = ( Scnd_Mom + t(Scnd_Mom) ) / 2
-  Covs = Scnd_Mom - Exps %*% t(Exps)
-
-  Amat = rbind( Constr$Aeq , Constr$Aleq ) # stack the equality constraints on top of the inequality constraints
-  bvec = rbind( Constr$beq , Constr$bleq ) # stack the equality constraints on top of the inequality constraints
-  
-  #if ( nrow(Covs) != length( zeros(N,1) ) ) stop("Dmat and dvec are incompatible!")
-  #if ( nrow(Covs) != nrow(Amat)) stop("Amat and dvec are incompatible!")
-  
-  MinSDev_Units = solve.QP( Dmat = Covs , dvec = -1 * zeros(N,1) , Amat = -1*t(Amat) , bvec = -1*bvec , meq = length( Constr$beq) ) # TODO: Check this
-  MinSDev_Exp = t( MinSDev_Units$solution ) %*% Exps
-  
-  # determine expectation of maximum-expectation portfolio
-  
-  MaxExp_Units = linp( E = Constr$Aeq , F = Constr$beq , G = -1*Constr$Aleq , H = -1*Constr$bleq , Cost = -Exps , ispos = FALSE )$X 
-    
-  MaxExp_Exp = t( MaxExp_Units ) %*% Exps
-  
-  # slice efficient frontier in NumPortf equally thick horizontal sections
-  Grid = t( seq( from = Options$FrontierSpan[1] , to = Options$FrontierSpan[2] , length.out = Options$NumPortf ) )
-  TargetExp = as.numeric( MinSDev_Exp ) + Grid * as.numeric( ( MaxExp_Exp - MinSDev_Exp ) )
-  
-  # compute composition, expectation, s.dev. and CVaR of the efficient frontier
-  Composition = matrix( , ncol = N , nrow = 0 )
-  Exp = matrix( , ncol = 1 , nrow = 0 )
-  SDev = matrix( , ncol = 1 , nrow = 0 )
-  CVaR = matrix( , ncol = 1 , nrow = 0 )
-  
-  for (i in 1:Options$NumPortf )
-  {
-    # determine least risky portfolio for given expectation
-    AEq = rbind( Constr$Aeq , t(Exps) )        # equality constraint: set expected return for each asset...
-    bEq = rbind( Constr$beq , TargetExp[i] )
-
-    Amat = rbind( AEq , Constr$Aleq ) # stack the equality constraints on top of the inequality constraints
-    bvec = rbind( bEq , Constr$bleq ) # ...and target portfolio return for i'th efficient portfolio
-    
-    # Why is FirstDegree "expected returns" set to 0? 
-        # Because we capture the equality view in the equality constraints matrix
-        # In other words, we have a constraint that the Expected Returns by Asset %*% Weights = Target Return
-    Units = solve.QP( Dmat = Covs , dvec = -1*zeros(N,1) , Amat = -1*t(Amat) , bvec = -1*bvec , meq = length( bEq ) )
-          
-    # store results
-    Composition = rbind( Composition , t( Units$solution ) )
-
-    Exp = rbind( Exp , t( Units$solution ) %*% Exps )
-    SDev = rbind( SDev , sqrt( t( Units$solution ) %*% Covs %*% Units$solution ) )
-    CVaR = rbind( CVaR , ComputeCVaR( Units$solution , PnL , Options$Quant ) )
-  }   
-
-  colnames( Composition ) = c( "MSFT_vol_30" , "MSFT_vol_91" , "MSFT_vol_182" , 
-                               "YHOO_vol_30" , "YHOO_vol_91" , "YHOO_vol_182" ,    
-                               "GOOG_vol_30" , "GOOG_vol_91" , "GOOG_vol_182" )
-  
-  return( list( Exp = Exp , SDev = SDev , CVaR = CVaR , Composition = Composition ) )
-  }
-
-
-MapVol = function( sig , y , K , T )
-  {
-  # in real life a and b below should be calibrated to security-specific time series
-
-  a=-.00000000001
-  b= .00000000001 
-
-  s = sig + a/sqrt(T) * ( log(K) - log(y) ) + b/T*( log(K) - log(y) )^2
-
-  return( s )
-  }
-
-HorizonPricing = function( Butterflies , X )
-{
-r = .04       # risk-free rate
-tau = 1/252   # investment horizon
-
-#  factors: 1. 'MSFT_close'   2. 'MSFT_vol_30'   3. 'MSFT_vol_91'   4. 'MSFT_vol_182'
-#  securities:                1. 'MSFT_vol_30'   2. 'MSFT_vol_91'   3. 'MSFT_vol_182'
-
-# create a new row called DlnY and Dsig
-# create a new row called 'DlnY'. Assign the first row (vector) of X to this DlnY for the 1:3 securities
-for ( s in 1:3 ) { Butterflies[[s]]$DlnY = X[ , 1 ] }
-
-# assign the 2nd row of X to a new element called Dsig
-Butterflies[[1]]$Dsig=X[ , 2 ]
-Butterflies[[2]]$Dsig=X[ , 3 ]
-Butterflies[[3]]$Dsig=X[ , 4 ]
-
-#  factors:  5. 'YHOO_close'   6. 'YHOO_vol_30'   7. 'YHOO_vol_91'   8. 'YHOO_vol_182'
-#  securities:                 4. 'YHOO_vol_30'   5. 'YHOO_vol_91'   6. 'YHOO_vol_182'
-for ( s in 4:6 ) { Butterflies[[s]]$DlnY=X[ , 5 ] }
-
-Butterflies[[4]]$Dsig=X[ , 6 ]
-Butterflies[[5]]$Dsig=X[ , 7 ]
-Butterflies[[6]]$Dsig=X[ , 8 ]
-
-#  factors:  #  9. 'GOOG_close'  10. 'GOOG_vol_30'  11. 'GOOG_vol_91'  12. 'GOOG_vol_182'
-#  securities:                    7. 'GOOG_vol_30'   8. 'GOOG_vol_91'   9.  'GOOG_vol_182'
-for ( s in 7:9 ) { Butterflies[[s]]$DlnY=X[ , 9 ] }
-
-Butterflies[[7]]$Dsig=X[ , 10 ]
-Butterflies[[8]]$Dsig=X[ , 11 ]
-Butterflies[[9]]$Dsig=X[ , 12 ]
-
-PnL = matrix( NA , nrow = nrow(X) )
-
-for ( s in 1:length(Butterflies) )
-    {  
-    Y = Butterflies[[s]]$Y_0 * exp(Butterflies[[s]]$DlnY)
-    ATMsig = apply( cbind( Butterflies[[s]]$sig_0 + Butterflies[[s]]$Dsig , 10^-6 ) , 1 , max )     
-    t = Butterflies[[s]]$T - tau
-    K = Butterflies[[s]]$K
-    sig = MapVol(ATMsig , Y , K , t )
-    
-    # library(RQuantLib) # this function can only operate on one option at a time, so we use fOptions    
-    # C = EuropeanOption( type = "call" , underlying = Y , strike = K , dividendYield = 0 , riskFreeRate = r , maturity = t , volatility = sig )$value
-    # P = EuropeanOption( type = "put" ,  underlying = Y , strike = K , dividendYield = 0 , riskFreeRate = r , maturity = t , volatility = sig )$value
-    
-    # use fOptions to value options
-    library( fOptions )
-    C = GBSOption( TypeFlag = "c" , S = Y , X = K , r = r , b = 0 , Time = t , sigma = sig  )
-    P = GBSOption( TypeFlag = "p" , S = Y , X = K , r = r , b = 0 , Time = t , sigma = sig  )    
-
-    Butterflies[[s]]$P_T = C@price + P@price
-    PnL = cbind( PnL , Butterflies[[s]]$P_T )
-    }
-    return( PnL[ , -1 ] ) # drop the NA initialization column
-}
-
-#' This script performs the butterfly-trading case study for the 
-#' Entropy-Pooling approach by Attilio Meucci, as it appears in 
-#' "A. Meucci - Fully Flexible Views: Theory and Practice -
-#' The Risk Magazine, October 2008, p 100-106"
-#' available at www.symmys.com > Research > Working Papers
-#' Adapted from Code by A. Meucci, September 2008
-#' Last version available at www.symmys.com > Teaching > MATLAB
-
-
-###################################################################
-#' Load panel X of joint factors realizations and vector p of respective probabilities
-#' In real life, these are provided by the estimation process
-###################################################################
-load("butterflyTradingX.rda")
-
-FactorNames = c('MSFT_close' , 'MSFT_vol_30' , 'MSFT_vol_91' , 'MSFT_vol_182' , 
-                'YHOO_close' , 'YHOO_vol_30' , 'YHOO_vol_91' , 'YHOO_vol_182' ,
-                'GOOG_close' , 'GOOG_vol_30' , 'GOOG_vol_91' , 'GOOG_vol_182' , 
-                'USD SWAP 2Y rate' , 'USD SWAP 10Y rate' )
-
-p = matrix( rep( 1 / 100224 , 100224 ) , ncol = 1 ) # creates a Jx1 matrix of equal probabilities for each scenario
-
-ans = 1 / 100224
-
-library( R.matlab )
-library( matlab )
-
-emptyMatrix = matrix( nrow = 0 , ncol = 0 )
-
-###################################################################
-# Load current prices, deltas and other analytics of the securities 
-# In real life, these are provided by data provider
-###################################################################
-
-load( "FactorDistributions.rda" )
-
-# create Butterflies as a list of named arrays
-Butterflies = as.matrix( Butterflies[[1]] , nrow = 8 , ncol = 9 )
-Butterflies = matrix(Butterflies, ncol = 9 , nrow = 8 )
-rownames( Butterflies ) = c( "Name" , "P_0" , "Y_0" , "K" , "T" , "sig_0" , "Delta" , "Vega" )
-colnames( Butterflies ) = c( "MSFT_vol_30" , "MSFT_vol_91" , "MSFT_vol_182" , 
-                             "YHOO_vol_30" , "YHOO_vol_91" , "YHOO_vol_182" ,	
-                             "GOOG_vol_30" , "GOOG_vol_91" , "GOOG_vol_182" )
-
-colnames( X ) = FactorNames
-Butterflies = lapply( seq_len( ncol( Butterflies ) ), function( i ) Butterflies[ , i ] )
-
-###################################################################
-# Map factors scenarios into p&l scenarios at the investment horizon
-# In real life with complex products, the pricing can be VERY costly 
-###################################################################
-PnL = HorizonPricing( Butterflies , X )
-
-###################################################################
-# compute mean-risk efficient frontier based on prior market distribution
-###################################################################
-Options = list()
-Options$Quant = .95  # quantile level for CVaR
-Options$NumPortf = 30  # number of portfolios in efficient frontier
-Options$FrontierSpan = array( c(.1 , .8) ) # range of normalized exp.vals. spanned by efficient frontier
-Options$DeltaNeutral = 1 # 1 = portfolio is delta neutral, 0 = no constraint
-Options$Budget = 0 # initial capital
-Options$Limit = 10000 # limit to absolute value of each investment
-
-# The following command loads the PnL object as calculated by the MATLAB option-pricing model
-    # This is for testing purposes so we can replicate results with the Meucci program
-load("C:\\Users\\Ram Ahluwalia\\Documents\\Applications\\R\\r-user\\MeucciButterflyPnL.rda")
-
-optimalPortfolios = LongShortMeanCVaRFrontier( PnL , p , Butterflies , Options )
-
-View( optimalPortfolios ) # Note that composition is measured in dollars. Here we are short GOOG_vol_91 and long GOOG_vol_182
-
-# plot efficient frontier ( Exp , CVaR , w )
-
-###################################################################
-# process views (this is the core of the Entropy Pooling approach)
-###################################################################
-p_1 = ViewImpliedVol( X , p )
-    # View 1 (inequality view): bearish on the 2m-6m implied volatility spread for Google
-    # Note that the mean-CVaR efficient frontier is long based on the reference model
-    # After processing the view, the G6m-G2m spread is now short in the mean-CVaR long-short efficient frontier
-
-p_2 = ViewRealizedVol( X , p )
-    # view 2 (relative inequality view on median): bullish on realized volatility of MSFT (i.e. absolute log-change in the underlying). 
-    # This is the variable such that, if larger than a threshold, a long position in the butterfly turns into a profit (e.g. Rachev 2003)
-    # we issue a relative statement on the median, comparing it with the third quintile implied by the reference market model
-
-p_3 = ViewCurveSlope( X , p )
-    # view 3 (equality view - expectations and binding constraints): slope of the yield curve will increase by 5 bp
- 
-# assign confidence to the views and pool opinions
-    # sum of weights should be equal 1
-    # .35 is the weight on the reference model
-    # .2 is the confidence on View 1; .25 is the confidence on View 2; .2 is the confidence on View 3
-    c = matrix( c( .35 , .2 , .25 , .2 ) , nrow = 1 ) 
-
-p_= cbind( p , p_1 , p_2 , p_3 ) %*% t(c) # compute the uncertainty weighted posterior probabilities
-
-###################################################################
-# compute mean-risk efficient frontier based on posterior market distribution
-###################################################################
-optimalPortfoliosPosterior = LongShortMeanCVaRFrontier( PnL , p_ , Butterflies , Options )
-
-View( optimalPortfoliosPosterior ) # Note that composition is measured in dollars. Now we are long GOOG_vol_91 and short GOOG_vol_182
-
-# plot efficient frontier ( Exp_ , CVaR_ , w_ )
-PlotFrontier( optimalPortfoliosPosterior$Exp , optimalPortfoliosPosterior$SDev , optimalPortfoliosPosterior$Composition )
-
-###################################################################
-# tests
-###################################################################
-p_3b = ViewCurveSlope( X , p )
-p_4 = ViewCurveSlopeTest( X , p )
-
-ViewCurveSlopeTest = function( X , p )
-  { 
-  J = nrow( X ) ; K = ncol( X )
-  
-  # constrain probabilities to sum to one...
-  Aeq = ones( 1 , J )
-  beq = matrix( 1 , nrow = 1 , ncol = 1 )
-  browser()
-  # ...constrain the expectation...
-  V = matrix( , nrow = nrow( X ) , ncol = 0 )  
-  # Add 3 equality views
-  V = cbind( V , X[ , 14 ] - X[ , 13 ] ) # View 1: spread on treasuries
-  V = cbind( V , X[ , 14 ] - X[ , 13 ] ) # View 2: identical view (spread on treasuries)
-  V = cbind( V , X[ , 6 ] - X[ , 5 ] )   # View 3: difference in YHOO Vol
-  v = matrix( c( .0005 , .0005 , 0 ) , nrow = ncol( V ) , ncol = 1 ) # targets: 5 bp for the two identical spread views, 0 for the YHOO vol view
-  
-  Aeq = rbind( Aeq , t(V) )
-      
-  beq = rbind( beq , v )
-  
-  # add an inequality view
-    # ...constrain the median...  
-  V = abs( X[ , 1 ] ) # absolute value of the log of changes in MSFT close prices (definition of realized volatility)    
-  V_Sort = sort( V , decreasing = FALSE ) # sorting of the abs value of log changes in prices from smallest to largest
-  I_Sort = order( V ) 
-  
-  F = cumsum( p[ I_Sort ] ) # represents the cumulative sum of probabilities from ~0 to 1
-  
-  I_Reference = max( matlab:::find( F <= 3/5 ) ) # finds the (max) index corresponding to element with value <= 3/5 along the empirical cumulative density function for the abs log-changes in price
-  V_Reference = V_Sort[ I_Reference ] # returns the corresponding abs log of change in price at the 3/5 of the cumulative density function
-
-  I_Select = find( V <= V_Reference ) # finds all indices with value of abs log-change in price less than the reference value  
-  a = zeros( 1 , J )
-  a[ I_Select ] = 1 # select those cases where the abs log-change in price is below the 3/5 quantile of the empirical distribution...
-  
-  A = a
-  b = .5 # ... and assign these cases a total probability of 50%. This moves the median of the distribution
-  
-  # ...compute posterior probabilities
-  p_ = EntropyProg( p , A , b , Aeq , beq )
-  return( p_ )
-  }
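The median view in `ViewCurveSlopeTest` reduces to a probability-weighted quantile. A minimal base-R sketch of that step on hypothetical data (base `which` stands in for matlab's `find`; the data here are illustrative only):

```r
# Probability-weighted quantile step, sketched on synthetic data
set.seed( 1 )
V <- abs( rnorm( 100 ) )            # stand-in for abs log price changes
p <- rep( 1 / 100 , 100 )           # uniform prior scenario probabilities
I_Sort <- order( V )
F <- cumsum( p[ I_Sort ] )          # weighted empirical CDF over sorted values
V_Reference <- V[ I_Sort ][ max( which( F <= 3/5 ) ) ] # value at the 3/5 weighted quantile
a <- as.numeric( V <= V_Reference ) # indicator row for the inequality view
```

With uniform probabilities this selects exactly the smallest 60 of 100 scenarios; the inequality `a %*% p_ <= .5` then forces the posterior to put no more than half its mass below `V_Reference`, shifting the median.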

Deleted: pkg/PerformanceAnalytics/sandbox/Meucci/MeucciCmaCopula.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Meucci/MeucciCmaCopula.R	2012-06-07 15:54:36 UTC (rev 1995)
+++ pkg/PerformanceAnalytics/sandbox/Meucci/MeucciCmaCopula.R	2012-06-07 21:06:06 UTC (rev 1996)
@@ -1,410 +0,0 @@
-#' Risk and Asset Allocation functions
-#'
-#' \tabular{ll}{
-#' Package: \tab Attilio Meucci Risk functions\cr
-#' Type: \tab Package\cr
-#' Version: \tab 0.1\cr
-#' Date: \tab 2011-11-22\cr
-#' LazyLoad: \tab yes\cr
-#' }
-#'
-#' Contains a collection of functions implementing code from Attilio Meucci's text
-#' "Risk and Asset Allocation". See www.symmys.com for the latest MATLAB source code and 
-#' more information. Special thanks to Attilio Meucci and www.symmys.com 
-#' for providing the MATLAB source on which most of these functions are based
-#' 
-#' @name RiskAssetAllocation-package
-#' @aliases RiskAssetAllocation CMAcopula 
-#' @docType package
-#' @title Implements the CMA copula based on Meucci (2011)
-#' @author Ram Ahluwalia \email{rahluwalia@@gmail.com}
-#' @references 
-#' \url{http://www.symmys.com}
-#' @keywords package meucci copula entropy-pooling entropy CMA panic
-#' @examples
-#'         print(" This is a test example at the top of the meuccicopula.R file ")
-# library(roxygen)
-library(roxygen2)
-roxygen2::roxygenise()
-
-# Depends: \tab \cr matlab  MASS roxygen2 trust pracma Hmisc Matrix \cr
-
-# fix documentation for @return in PanicCopula()
-# plot shape of the copula (perspective or surface plot)
-# fix MvnRnd function (Schur decomposition)
-# fix warnings
-
-#' Generates normal simulations whose sample moments match the population moments
-#'
-#' Adapted from file 'MvnRnd.m'. Most recent version of article and code available at http://www.symmys.com/node/162
-#' see A. Meucci - "Simulations with Exact Means and Covariances", Risk, July 2009
-#'
-#' @param M         a numeric indicating the sample first moment of the distribution
-#' @param S         a covariance matrix
-#' @param J         a numeric indicating the number of trials
-#'
-#' @author Ram Ahluwalia \email{rahluwalia@@gmail.com}
-#' @references 
-#' \url{http://www.symmys.com}
-#' TODO: Add Schur decomposition. Right now the function only samples from mvrnorm, so sample moments do not match population moments
-#' I have sample code commented out below to implement this correctly but I require a function that returns the unitaryMatrix from a Schur decomposition
-MvnRnd = function( M , S , J )
-  {
-  library(MASS)
-  X = MASS::mvrnorm( n = J , mu = M , Sigma = S ) # Todo: need to swap with Meucci function and Schur method
-  return( X = X )
-
-    # # compute sample covariance: NOTE defined as "cov(Y,1)", not as "cov(Y)"
-    # S_ = cov( Y , 1 )
-    # 
-    # # solve Riccati equation using Schur method
-    #     zerosMatrix = matrix( 0 , nrow = N , ncol = N )
-    #     # define the Hamiltonian matrix
-    #     H1 = cbind( zerosMatrix , -1*S_ )
-    #     H2 = cbind( -S , zerosMatrix ) 
-    #     H = rbind( H1 , H2 )
-    #     # perform its Schur decomposition. 
-    #     # TODO: check that the result returns real eigenvalues on the diagonal. ?Schur seems to give an example with real eigenvalues
-    #     schurDecomp = Schur( H )
-    #     T_ = schurDecomp@T # quasi-triangular factor
-    #     U_ = schurDecomp@Q # Matrix::Schur returns the unitary matrix in slot Q
-    #     # U = ordschur(U_,T_,'lhp')
-    #     # U_lu = U(1:N,1:N)
-    #     # U_ld = U(N+1:end,1:N)
-    #     # B = U_ld/U_lu
-    # 
-    # # affine transformation to match mean and covariances
-    # # X = Y%*%B + repmat(M', J , 1 ) 
-    #
-  }
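As a stopgap for the TODO above, exact first and second sample moments can be enforced without the Schur/Riccati machinery by an affine transformation of the simulated draws. This is a sketch under that assumption, not Meucci's method; the name `MvnRndExact` is introduced here for illustration:

```r
# Sketch: enforce exact sample mean M and exact sample covariance S
# (with divisor J, i.e. MATLAB's cov(Y,1)) via an affine transformation.
library(MASS)

MvnRndExact <- function( M , S , J )
  {
  N  <- length( M )
  Y  <- MASS::mvrnorm( n = J , mu = rep( 0 , N ) , Sigma = S )
  Y  <- sweep( Y , 2 , colMeans( Y ) )      # exact zero sample mean
  S_ <- crossprod( Y ) / J                  # sample covariance, divisor J
  B  <- solve( chol( S_ ) ) %*% chol( S )   # map sample cov onto target cov
  X  <- Y %*% B                             # now crossprod(X)/J equals S
  sweep( X , 2 , M , "+" )                  # shift to the target mean
  }
```

Note that `MASS::mvrnorm` also accepts `empirical = TRUE`, which matches the sample mean and the (n-1)-divisor covariance exactly; the sketch above differs only in using the divisor-J convention of the commented MATLAB port.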
-
-
-#' CMA separation. Decomposes arbitrary joint distributions (scenario-probabilities) into their copula and marginals
-#'
-#' The CMA separation step attains, from the scenario-probability representation of the joint distribution X, 
-#'      the scenario-probability representation of the copula (the grades U) and an inter/extrapolation representation of the marginal CDFs
-#'
-#' Separation step of Copula-Marginal Algorithm (CMA)
-#' Meucci A., "New Breed of Copulas for Risk and Portfolio  Management", Risk, September 2011
-#' Most recent version of article and code available at http://www.symmys.com/node/335
-#'
-#' @param   X    A matrix where each row corresponds to a scenario/sample from a joint distribution. 
-#'                    Each column represents the value from a marginal distribution
-#' @param   p    A 1-column matrix of probabilities for the J scenarios of the joint distribution in X
-#'
-#' @return  xdd  a JxN matrix where each column consists of each marginal's generic x values in ascending order
-#' @return  udd  a JxN matrix containing the cumulative probability (cdf) for each marginal by column - it is rescaled by 'l' to be <1 at the far right of the distribution
-#'                  can interpret 'udd' as the probability weighted grade scenarios (see formula 11 in Meucci)
-#' @return  U    a copula (J x N matrix) - the joint distribution of grades defined by feeding the original variables X into their respective marginal CDF
-#'
-#'
-#' @author Ram Ahluwalia \email{rahluwalia@@gmail.com}
-#' @references 
-#' \url{http://www.symmys.com}
-CMAseparation = function( X , p ) 
-{
-# @example     test = cbind( seq( 102 , 1 , -2 ) , seq( 100 , 500 , 8 ) )
-# @example     prob = rep(.02 , 51 )
-# @example     CMAseparation( test , prob )
-
-library(pracma)
-# preprocess variables
-    X = as.matrix( X ) # J x N matrix. One column for each marginal distribution, J scenarios
-    p = as.matrix( p ) # J x 1 matrix of twisted probabilities
-    J = nrow( X )
-    N = ncol( X )
-    l = J / ( J + 1 ) # Laplace rescaling: keeps the largest grade strictly below 1
-
-    # Floor each probability at a small epsilon so that no scenario is strictly impossible under the prior
-        # This is necessary because Bayes' rule cannot update a scenario whose prior probability is 0
-    p = apply( cbind( p , 1 / J * 10^(-8) ) , 1 , max ) # J-vector: elementwise max of p and 1 / J * 10^(-8)    
-
-    p = p / sum(p) # Return a re-scaled vector of probabilities summing to 1
-    u = 0 * X # initialize a J x N matrix of 0's
-    U = 0 * X
-
-# begin core algorithm        
-    # initialize empty J x N matrices
-    x = matrix( nrow = J , ncol = N ) 
-    Indx = matrix( nrow = J , ncol = N )
-
-    for ( n in 1:N ) # for each marginal...
-        {
[TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/returnanalytics -r 1996


More information about the Returnanalytics-commits mailing list