[Returnanalytics-commits] r2654 - in pkg/Meucci: . R demo man

noreply at r-forge.r-project.org noreply at r-forge.r-project.org
Sun Jul 28 13:28:31 CEST 2013


Author: xavierv
Date: 2013-07-28 13:28:31 +0200 (Sun, 28 Jul 2013)
New Revision: 2654

Added:
   pkg/Meucci/R/FitMultivariateGarch.R
   pkg/Meucci/R/MvnRnd.R
   pkg/Meucci/demo/S_EquitiesInvariants.R
   pkg/Meucci/demo/S_ProjectNPriceMvGarch.R
   pkg/Meucci/demo/S_Wishart.R
   pkg/Meucci/man/FitMultivariateGarch.Rd
   pkg/Meucci/man/garch1f4.Rd
   pkg/Meucci/man/garch2f8.Rd
Modified:
   pkg/Meucci/DESCRIPTION
   pkg/Meucci/NAMESPACE
   pkg/Meucci/R/CmaCopula.R
Log:
- added more demo scripts from chapters 2 and 3 and their associated functions

Modified: pkg/Meucci/DESCRIPTION
===================================================================
--- pkg/Meucci/DESCRIPTION	2013-07-28 11:16:20 UTC (rev 2653)
+++ pkg/Meucci/DESCRIPTION	2013-07-28 11:28:31 UTC (rev 2654)
@@ -4,7 +4,7 @@
     Meucci.
 Version: 0.2.2
 Date: $Date: 2012-06-06 15:18:48 -0500 (Wed, 06 Jun 2012) $
-Author: Ram Ahluwalia, Manan Shah, Xavier Vals
+Author: Ram Ahluwalia, Manan Shah, Xavier Valls
 Maintainer: Brian G. Peterson <brian at braverock.com>
 Description: Attilio Meucci is a thought leader in advanced risk and portfolio
     management. His innovations include Entropy Pooling (technique for fully
@@ -32,9 +32,10 @@
     pracma,
     R.utils,
     mvtnorm,
-    dlm
+    dlm,
+    quadprog,
+    signal
 Suggests:
-    quadprog,
     limSolve,
     Matrix,
     MASS,
@@ -47,7 +48,6 @@
     expm,
     latticeExtra,
     scatterplot3d,
-    psych
 License: GPL
 URL: http://r-forge.r-project.org/projects/returnanalytics/
 Copyright: (c) 2012
@@ -86,3 +86,5 @@
     'GenerateUniformDrawsOnUnitSphere.R'
     'PlotMarginalsNormalInverseWishart.R'
     'RandNormalInverseWishart.R'
+    'FitMultivariateGarch.R'
+    'MvnRnd.R'

Modified: pkg/Meucci/NAMESPACE
===================================================================
--- pkg/Meucci/NAMESPACE	2013-07-28 11:16:20 UTC (rev 2653)
+++ pkg/Meucci/NAMESPACE	2013-07-28 11:28:31 UTC (rev 2654)
@@ -11,6 +11,9 @@
 export(DetectOutliersViaMVE)
 export(EntropyProg)
 export(FitExpectationMaximization)
+export(FitMultivariateGarch)
+export(garch1f4)
+export(garch2f8)
 export(GenerateLogNormalDistribution)
 export(GenerateUniformDrawsOnUnitSphere)
 export(hermitePolynomial)

Modified: pkg/Meucci/R/CmaCopula.R
===================================================================
--- pkg/Meucci/R/CmaCopula.R	2013-07-28 11:16:20 UTC (rev 2653)
+++ pkg/Meucci/R/CmaCopula.R	2013-07-28 11:28:31 UTC (rev 2654)
@@ -4,52 +4,7 @@
 # fix MvnRnd function (Schur decomposition)
 # fix warnings
 
-#' Generates normal simulations whose sample moments match the population moments
-#'
-#' Adapted from file 'MvnRnd.m'. Most recent version of article and code available at http://www.symmys.com/node/162
-#' see A. Meucci - "Simulations with Exact Means and Covariances", Risk, July 2009
-#'
-#' @param M         a numeric indicating the sample first moment of the distribution
-#' @param S         a covariance matrix
-#' @param J         a numeric indicating the number of trials
-#'
-#' @author Ram Ahluwalia \email{rahluwalia@@gmail.com}
-#' @references 
-#' \url{http://www.symmys.com}
-#' TODO: Add Schur decomposition. Right now function is only sampling from mvrnorm so sample moments do no match population moments
-#' I have sample code commented out below to implement this correctly but I require a function that returns the unitaryMatrix from a Schur decomposition
-#' @export
-MvnRnd = function( M , S , J )
-{
-  library(MASS)
-  X = MASS::mvrnorm( n = J , mu = M , Sigma = S ) # Todo: need to swap with Meucci function and Schur method
-  return( X = X )
-    
-    # # compute sample covariance: NOTE defined as "cov(Y,1)", not as "cov(Y)"
-    # S_ = cov( Y , 1 )
-    # 
-    # # solve Riccati equation using Schur method
-    #     zerosMatrix = matrix( rep( 0 , length( N * N ) ) , nrow = N )
-    #     # define the Hamiltonian matrix
-    #     H1 = cbind( zerosMatrix , -1*S_ )
-    #     H2 = cbind( -S , zerosMatrix ) 
-    #     H = rbind( H1 , H2 )
-    #     # perform its Schur decomposition. 
-    #     # TODO: check that the result returns real eigenvalues on the diagonal. ?Schur seems to give an example with real eigenvalues
-    #     schurDecomp = Schur( H )
-    #     T = SchurDecomp
-    #     # U_ = unitaryMatrix??? TODO: Find a function in R that returns the unitaryMatrix from a Schur decomposition
-    #     # U = ordschur(U_,T_,'lhp')
-    #     # U_lu = U(1:N,1:N)
-    #     # U_ld = U(N+1:end,1:N)
-    #     # B = U_ld/U_lu
-    # 
-    # # affine transformation to match mean and covariances
-    # # X = Y%*%B + repmat(M', J , 1 ) 
-    #
-}
 
-
 #' CMA separation. Decomposes arbitrary joint distributions (scenario-probabilities) into their copula and marginals
 #'
 #' The CMA separation step attains from the cdf "F" for the marginal "X", the scenario-probabilities representation 

Added: pkg/Meucci/R/FitMultivariateGarch.R
===================================================================
--- pkg/Meucci/R/FitMultivariateGarch.R	                        (rev 0)
+++ pkg/Meucci/R/FitMultivariateGarch.R	2013-07-28 11:28:31 UTC (rev 2654)
@@ -0,0 +1,793 @@
+#' Estimation of multivariate GARCH models
+#'
+#'	@param    returns : [matrix] (T x N) returns; rows correspond to time and columns to assets
+#'	@param    demean  : [scalar] if demean = 1 (the default) the returns are demeaned before the model is estimated
+#'	@param    eps     : [scalar] used in enforcing a_ii + b_ii <= 1 - eps; the default value is zero
+#'	@param    df      : [scalar] degrees of freedom for the t-distribution; the default value of 500 makes it effectively normal
+#'	
+#'	@return   mu      : [vector] (N x 1) sample mean of the returns (zero when demean != 1)
+#'	@return   ATMF    : [matrix] coefficient matrix A-tilde (in the notation of the paper)
+#'	@return   BTMF    : [matrix] coefficient matrix B-tilde (in the notation of the paper)
+#'	@return   CTMF    : [matrix] coefficient matrix C-tilde (in the notation of the paper)
+#'	@return   Hhat    : [matrix] forecasted conditional covariance matrix
+#'
+#' @note Initially written by Olivier Ledoit and Michael Wolf
+#'
+#' @references
+#' \url{http://symmys.com/node/170}
+#' See Meucci's script for "FitMultivariateGarch.m"
+#'
+#' @author Xavier Valls \email{flamejat@@gmail.com}
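+#' @examples
+#' \dontrun{
+#' # Illustrative sketch only (not part of the original commit): 'rets' stands for a
+#' # hypothetical T x N matrix of asset returns, here simulated i.i.d. normals.
+#' rets = matrix( rnorm( 500 * 2, sd = 0.01 ), ncol = 2 )
+#' fit  = FitMultivariateGarch( rets, demean = 1, eps = 0.01, df = 500 )
+#' fit$Hhat   # forecasted conditional covariance matrix
+#' }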
+#' @export
+
+FitMultivariateGarch = function( returns, demean = 1, eps = 0, df = 500 )
+{
+    if (eps < 0)
+    { 
+      stop("eps must be a (small) non-negative number")
+    }
+
+    # Initialization
+    T = nrow( returns );
+    N = ncol( returns );
+    mu = matrix( 0, N, 1 );  # zero mean is used when the returns are not demeaned
+    if ( 1 == demean )
+    {
+        mu = apply( returns, 2, mean );                         # column (asset) means
+        returns = returns - matrix( mu, T, N, byrow = TRUE );   # demean each asset
+    }
+    S = t(returns) %*% returns / ( T - 1 );
+    x = t(returns);
+
+    A = matrix( 0, N, N );
+    B = matrix( 0, N, N );
+    C = matrix( 0, N, N );
+
+    # Rescale Data
+    scale = sqrt( matrix( apply( t(x)^2, 2, mean ) ) );
+    x = x / scale[ , array( 1, T ) ];
+
+    # Estimation of On-Diagonal Elements
+    h = matrix( 0, N, T );
+     for( i in 1 : N )
+    {
+        # Likelihood Maximization
+        q = garch1f4( t( x[ i, ] ), eps, df)$q;
+        A[ i, i ] = q[ 2 ];
+        B[ i, i ] = q[ 3 ];
+        C[ i, i ] = q[ 1 ];
+        h[ i, ]   = filter( cbind( 0, q[ 2] ), cbind( 1, -q[ 3 ] ), x[ i, ]^2 * (df-2) / df, mean( x[ i, ] ^ 2) * ( df-2 ) / df ) +
+                    filter( cbind( 0, q[ 1 ] ), cbind( 1, -q[ 3 ] ), matrix( 1, 1, T) );
+    }
+
+    # First-step Estimation of Off-Diagonal Elements
+    for( i in 1 : (N-1) )
+    {
+        for( j in (i +1)  : N )
+        {
+            # Likelihood Maximization
+            theta = garch2f8( x[ i, ] * x[ j, ], C[ i, i ], A[ i, i ], B[ i, i ], x[ i,  ]^2, h[ i, ], C[ j, j ], A[ j, j ], B[ j, j ], x[ j, ]^2, h[ j, ], df );
+            A[ i, j ] = theta[ 2 ];
+            B[ i, j ] = theta[ 3 ];
+            C[ i, j ] = theta[ 1 ];
+            A[ j, i ] = A[ i, j ];
+            B[ j, i ] = B[ i, j ];
+            C[ j, i ] = C[ i, j ];
+        }
+    }
+
+    theta = garch2f8( x[ N, ] * x[ N, ], C[ N, N ], A[ N, N ], B[ N, N ], x[ N,  ]^2, h[ N, ], C[ N, N ], A[ N, N ], B[ N, N ], x[ N, ]^2, h[ N, ], df );
+    A[ N, N ] = theta[ 2 ];
+    B[ N, N ] = theta[ 3 ];
+    C[ N, N ] = theta[ 1 ];
+
+
+    # Transformation of Coefficient Matrices
+    ATMF = minfro( A );
+    BTMF = minfro( B );
+    CTMF = minfro( C /( 1 - B )) * ( 1 - BTMF );
+
+    # Rescale
+    #C = C .* (scale * scale');
+    CTMF = CTMF * ( scale %*% t(scale) );
+
+    # Forecast of Conditional Covariance Matrix
+    Hhat = matrix( 0, N, N );
+    for( i in 1 : N )
+    {
+        for( j in 1 : N )
+        {
+            hSeries = filter( cbind( 0, ATMF[ i, j ] ), cbind( 1, -BTMF[ i, j ] ), returns[ , i ] * returns[ , j ], S[ i, j ] ) +
+                      filter( cbind( 0, CTMF[ i, j ] ), cbind( 1, -BTMF[ i, j ] ), matrix( 1, T,1));
+            Hhat[ i, j ] = hSeries[ T ];
+        }
+    }
+
+    mu = matrix( mu );
+
+    return( list( mu = mu, ATMF = ATMF, BTMF = BTMF, CTMF = CTMF, Hhat = Hhat ) );
+}
+
+#' Fit a GARCH(1,1) model with student-t errors
+#'
+#'  @param    x     : [vector] (T x 1) data generated by a GARCH(1,1) process
+#'  @param    eps   : [scalar] used in enforcing a + b <= 1 - eps
+#'  @param    df    : [scalar] degrees of freedom for the t-distribution
+#'  
+#'  @return   q     : [vector] (3 x 1) parameters of the GARCH(1,1) process: constant, ARCH and GARCH coefficients
+#'  @return   qerr  : [vector] (3 x 1) standard error of parameter estimates
+#'  @return   hf    : [scalar] current conditional heteroskedasticity estimate
+#'  @return   hferr : [scalar] standard error on hf
+#'
+#' @note 
+#'  The original MATLAB script was written by Olivier Ledoit, 4/28/1997
+#'  Uses a conditional t-distribution with fixed degrees of freedom
+#'  Difference with garch1f: errors come from the score alone
+#' 
+#' @references
+#' \url{http://symmys.com/node/170}
+#' See Meucci's script for "FitMultivariateGarch.m"
+#'
+#' @author Xavier Valls \email{flamejat@@gmail.com}
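+#' @examples
+#' \dontrun{
+#' # Illustrative sketch only (not part of the original commit): fit a univariate
+#' # GARCH(1,1) with t errors to a hypothetical return series 'ret'.
+#' ret = rnorm( 1000, sd = 0.01 )
+#' fit = garch1f4( ret, eps = 0.01, df = 500 )
+#' fit$q    # constant, ARCH and GARCH coefficients
+#' fit$hf   # current conditional variance estimate
+#' }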
+#' @export
+
+
+garch1f4 = function( x, eps, df )
+{
+    # Parameters
+    gold = ( 1 + sqrt(5) ) / 2; # step size increment 
+    tol1 = 1e-7;        # for termination criterion
+    tol2 = 1e-7;        # for closeness to boundary
+    big  = 2;           # for making the hessian negative definite
+    maxiter = 50;       # maximum number of iterations
+    n = 30;             # number of points on the grid
+
+    # Rescale
+    y = ( array(x) - mean( array(x)))^2;
+    t = length(y);
+    scale = sqrt(mean( y^2 ));
+    y = y / scale;
+    s = mean(y);
+
+    # Grid search
+    #[ag, bg] = meshgrid( seq( 0, 1 - eps, length = n ), seq( 0, 1-eps, length = n ) );
+    ag = outer(seq( 0, 1 - eps, length = n )*0, seq( 0, 1-eps, length = n ), FUN = "+" );
+    bg = outer(seq( 0, 1 - eps, length = n ) , seq( 0, 1-eps, length = n ) * 0, FUN = "+" )
+    cg = pmax( s * (1-ag-bg), 0 );
+    likeg = matrix( -Inf, n, n );
+
+    for( i in 1 : n )
+    {
+        for( j in 1 : (n-i+1) )
+        {
+            h = filter( cbind( 0, ag[ i, j ] ), cbind( 1, -bg[ i, j] ), y * ( df - 2 ) / df, s * ( df - 2 )/ df ) +  filter(cbind( 0, cg[ i, j ]),cbind( 1, - bg[ i, j ] ),matrix(1, t, 1));
+            likeg[ i, j ] = -sum( log(h) + (df+1) * log( 1 + y /h /df) );
+        }
+    }
+    maxlikeg = max(likeg, na.rm = TRUE);
+    maxima = which( likeg == maxlikeg );
+    maximum = max( maxima );
+
+    a = cbind(cg[ maximum ], ag[ maximum], bg[ maximum ]);
+    best   = 0;
+    da     = 0;
+    #term   = 1;
+    #negdef = 0;
+    iter   = 0;
+    while( iter < maxiter )
+    {
+        iter = iter + 1;
+        
+        # New parameter1
+        a = a + gold^best * da;
+        
+        # Conditional variance
+        h = filter(cbind( 0, a[ 2 ] ),cbind( 1, -a[ 3 ] ), y * ( df - 2 ) / df, s * (df-2) / df ) + filter(cbind( 0, a[ 1 ] ),cbind( 1, -a[ 3 ] ), matrix(1, t, 1 ) );
+        
+        # Likelihood
+        if( any( a<0 )||( (a[ 2 ] + a[ 3 ]) > (1 - eps) ))
+        {
+            like = -Inf;
+        } else
+        {
+            like = -sum( log( h )+( df + 1 ) * log( 1 + y /h /df) );
+        }
+
+        GG = cbind( filter(cbind(0, 1),cbind( 1, -a[ 3] ), matrix( 1, t, 1 )),
+              filter(cbind( 0, 1 ),cbind( 1, -a[3] ), y * (df-2) / df ),
+              filter(cbind( 0, 1 ),cbind( 1, -a[ 3 ]), h ));
+        colnames(GG)<-NULL;
+        g1  = ( ( df+1 ) * ( y / ( y + df * h ) ) - 1 ) / h;
+        G   = GG * (g1 %*% matrix( 1, 1, 3));
+        gra = apply( G, 2, sum );
+        
+        # Hessian
+        GG2 = GG[ ,c( 1, 2, 3, 1, 2, 3, 1, 2, 3)] * GG [, c( 1, 1, 1, 2, 2, 2, 3, 3, 3 ) ];
+        g2  = -((df+1) * ( y /( y + df*h) ) - 1 ) / h^2 - ( df * ( df + 1 ) )*(y / ( y+df * h ) ^2 / h );
+        HH  = matrix( 0, t, 9);
+        HH[ , 3 ] = filter(cbind(0, 1), cbind(1, -a[ 3 ] ),GG[ ,1 ] );
+        HH[ , 7 ] = HH[ , 3 ];
+        HH[ , 6 ] = filter(cbind(0, 1), cbind(1, -a[ 3 ] ),GG[ , 2 ] );
+        HH[ , 8 ] = HH[ ,6 ];
+        HH[ , 9 ] = filter(cbind(0, 1), cbind(1, -a[ 3 ] ),GG[ , 3 ] );
+        H   = GG2 * (g2 %*% matrix( 1, 1, 9)) + HH * (g1 %*% matrix( 1, 1, 9)) ;
+        hes = matrix( apply( H, 2, sum), 3, 3 );
+
+        e = eigen(hes);
+            
+        if( any(e$values > 0) )
+        {
+            negdef = 0;
+            # clamp the positive eigenvalues elementwise (same step as in garch2f8)
+            d = pmin( e$values, max( e$values[ e$values<0 ] ) / big);
+            hes = e$vectors %*% diag(d, length(d)) %*% t(e$vectors);
+        } else
+        {
+            negdef = 1;
+        }
+        
+        # Direction
+         da = -gra %*% solve( hes );
+        
+        # Termination criterion
+        term = (da %*% gra)[1];
+        if ((term < tol1) && negdef)
+        {
+            break;
+        }
+        
+        best = 0;
+        newa = a + gold^(best-1) * da;
+        if( any(newa < 0 ) || ( newa[ 2 ] + newa[ 3 ] > (1-eps) ) )
+        {
+            left = -Inf;
+        } else
+        {
+            h = filter( cbind( 0, newa[ 2 ] ), cbind( 1, -newa[ 3 ] ), y * ( df - 2 ) / df, s * ( df - 2 )/ df ) +  filter(cbind( 0, newa[ 1 ]),cbind( 1, - newa[ 3 ] ),matrix(1, t, 1));
+            
+            left = -sum( log(h) + (df+1) * log( 1 + y /h /df) )    
+        }
+        
+        newa = a + gold^best * da;
+        
+        if( any(newa < 0 ) || ( newa[ 2 ] + newa[ 3 ] > (1-eps) ) )
+        {
+            center = -Inf;
+        } else
+        {
+            h = filter( cbind( 0, newa[ 2 ] ), cbind( 1, -newa[ 3 ] ), y * ( df - 2 ) / df, s * ( df - 2 )/ df ) +  filter(cbind( 0, newa[ 1 ]),cbind( 1, - newa[ 3 ] ),matrix(1, t, 1));
+            center = -sum( log(h) + (df+1) * log( 1 + y /h /df) )
+        }
+        
+        newa = a + gold^( best+1 ) * da;
+
+        if( any(newa < 0 ) || ( newa[ 2 ] + newa[ 3 ] > (1-eps) ) )
+        {
+            right=-Inf;
+        } else
+        {
+            h = filter( cbind( 0, newa[ 2 ] ), cbind( 1, -newa[ 3 ] ), y * ( df - 2 ) / df, s * ( df - 2 )/ df ) +  filter(cbind( 0, newa[ 1 ]),cbind( 1, - newa[ 3 ] ),matrix(1, t, 1));
+            right = -sum( log(h) + (df+1) * log( 1 + y /h /df) )
+        }
+        if( all(like > c( left, center, right)) || all( left > c( center, right )) )
+        {
+            while( 1 )
+            {
+                best = best-1;
+                center = left;
+                newa = a + gold^( best-1 ) * da;
+                if( any(newa < 0 ) || ( newa[ 2 ] + newa[ 3 ] > (1-eps) ) )
+                {
+                  left = -Inf;
+                } else
+                {
+                    h = filter( cbind( 0, newa[ 2 ] ), cbind( 1, -newa[ 3 ] ), y * ( df - 2 ) / df, s * ( df - 2 )/ df ) +  filter(cbind( 0, newa[ 1 ]),cbind( 1, - newa[ 3 ] ),matrix(1, t, 1));
+                    
+                    left = -sum( log(h) + (df+1) * log( 1 + y /h /df) )    
+                }
+                if( all( center >= c( like, left ) ) )
+                {
+                    break;
+                }
+            }
+        }else
+        {
+            if( all( right > c( left, center ) ) )
+            {
+                while( 1 )
+                {
+                    best = best+1;
+                    center = right;
+                    newa = a + gold^(best+1) *da;
+                    if( any(newa < 0 ) || ( newa[ 2 ] + newa[ 3 ] > (1-eps) ) )
+                    {
+                        right = -Inf;
+                    } else
+                    {
+                        h = filter( cbind( 0, newa[ 2 ] ), cbind( 1, -newa[ 3 ] ), y * ( df - 2 ) / df, s * ( df - 2 )/ df ) +  filter(cbind( 0, newa[ 1 ]),cbind( 1, - newa[ 3 ] ),matrix(1, t, 1));               
+                        right = -sum( log(h) + (df+1) * log( 1 + y /h /df) )    
+                    }
+                    if( center > right )
+                    {
+                        break;
+                    }
+                }
+            }
+        }
+
+          # If stuck at boundary then stop
+        if( ( center == like ) && ( any(a<tol2) || ( ( a[ 2 ] + a[ 3 ] ) > (1-tol2) ) ) )
+        {
+            break;
+        }
+        
+        # end of optimization loop
+    }
+
+    if( length(a[ a < tol2 ]) )
+    {
+        a[ a < tol2 ] = matrix( 0, 1, length(a[ a<tol2 ]));
+    }
+
+    if( ( a[ 2 ] + a[ 3 ] )> ( 1-tol2 ) )
+    {
+        if( a[ 2 ]< ( 1 - tol2 ) )
+        {
+            a[ 2 ] = a[ 2 ] + (1 - a[ 2 ]-a[ 3 ] );
+        }else
+        {
+            a[ 3 ] = a[ 3 ] + ( 1 - a[ 2 ]- a[ 3 ] );
+        }
+    }
+
+    # Estimation error and volatility forecast
+    # aerr = inv(G'*G), as in the original MATLAB code
+    tmp   = t(G) %*% G;
+    aerr  = solve( tmp );
+    hf    = a[ 1 ] + a[ 2 ] * y[ t ] * (df-2) / df + a[ 3 ] * h[ t ];
+    gf    = c( 1,  y[ t ], h[ t ] ) + a[ 3 ] * GG[ t, ];
+    hferr = gf %*% aerr %*% gf;
+    aerr  = t( diag(aerr) );
+
+    # Revert to original scale
+    a[ 1 ]    = a[ 1 ] * scale;
+    aerr[ 1 ] = aerr[ 1 ] * scale^2;
+    hf        = hf* scale;
+    hferr     = hferr * scale^2;
+
+    aerr  = sqrt(aerr);
+    hferr = sqrt(hferr);
+    q     = a;
+    qerr  = aerr;
+
+    return( list( q = q, qerr = qerr, hf = hf, hferr = hferr ) );
+}
+
+#' Off-diagonal parameter estimation in bivariate GARCH(1,1) when diagonal parameters are given.
+#'
+#'  @param    y     : [vector] (T x 1) cross products of the two (rescaled) return series
+#'  @param    c1,a1,b1 : [scalars] constant, ARCH and GARCH coefficients of the first series
+#'  @param    y1    : [vector] (T x 1) squared returns of the first series
+#'  @param    h1    : [vector] (T x 1) conditional variances of the first series
+#'  @param    c2,a2,b2 : [scalars] constant, ARCH and GARCH coefficients of the second series
+#'  @param    y2    : [vector] (T x 1) squared returns of the second series
+#'  @param    h2    : [vector] (T x 1) conditional variances of the second series
+#'  @param    df    : [scalar] degrees of freedom for the t-distribution
+#'  
+#'  @return   q     : [vector] (3 x 1) off-diagonal parameters: constant, ARCH and GARCH coefficients
+#'
+#' @note 
+#'  Initially written by Olivier Ledoit, 4/28/1997
+#'  Uses a conditional t-distribution with fixed degrees of freedom
+#'  Steepest Ascent on boundary, Hessian off boundary, no grid search
+#' 
+#' @references
+#' \url{http://symmys.com/node/170}
+#' See Meucci's script for "FitMultivariateGarch.m"
+#'
+#' @author Xavier Valls \email{flamejat@@gmail.com}
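+#' @examples
+#' \dontrun{
+#' # Illustrative sketch only (not part of the original commit): garch2f8 is called by
+#' # FitMultivariateGarch for every pair of assets. 'x1'/'x2' stand for two hypothetical
+#' # (rescaled) return series, 'q1'/'q2' for their garch1f4 parameter vectors and
+#' # 'h1'/'h2' for the corresponding conditional variance paths.
+#' theta = garch2f8( x1 * x2, q1[ 1 ], q1[ 2 ], q1[ 3 ], x1^2, h1,
+#'                   q2[ 1 ], q2[ 2 ], q2[ 3 ], x2^2, h2, df = 500 )
+#' theta   # off-diagonal constant, ARCH and GARCH coefficients
+#' }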
+#' @export
+
+garch2f8 = function( y, c1, a1, b1, y1, h1, c2, a2, b2, y2, h2, df )
+{   
+    # Parameters
+    gold    = ( 1 + sqrt( 5 ) ) / 2;    # step size increment
+    tol1    = 1e-7;         # for termination criterion
+    tol2    = 1e-7;     # for closeness to boundary
+    big     = 2;            # for making the hessian negative definite
+    maxiter = 50;       # maximum number of iterations
+    #n=30;          # number of points on the grid
+
+    # Prepare
+    t  = length( y );
+    y1 = array( y1 );
+    y2 = array( y2 );
+    y  = array( y  );
+    s  = mean( y );
+    #s1 = mean(y1);
+    #s2 = mean(y2);
+    h1 = array( h1 );
+    h2 = array( h2 );
+
+    # Bounds
+    low  = c(-sqrt(c1*c2), 0, 0 ) + tol2;
+    high = c( sqrt(c1*c2), sqrt(a1*a2), sqrt(b1*b2) ) - tol2;
+
+    # Starting Point
+    a0 = 0.9 * sqrt( a1 * a2 );
+    b0 = 0.9 * sqrt( b1 * b2 );
+    c0 = mean( y ) * ( 1 - a0 - b0 ) * ( df - 2 ) / df;
+    c0 = sign( c0 ) * min( abs( c0 ), 0.9 * sqrt( c1*c2 ));
+
+    # Initialize optimization
+    a    = cbind( c0, a0, b0 );
+    best = 0;
+    da   = 0;
+    #term=1;
+    #negdef=0;
+    iter = 0;
+    while( iter < maxiter )
+    {
+        iter = iter + 1;
+        
+        # New parameter1
+        a = a + gold^best * da;
+        
+        # Conditional variance
+        h = filter(cbind( 0, a[ 2 ] ),cbind( 1, -a[ 3 ] ), y * ( df - 2 ) / df, s * (df-2) / df ) + filter(cbind( 0, a[ 1 ] ),cbind( 1, -a[ 3 ] ), matrix(1, t, 1 ) );
+        d = h1 * h2 - h^2;
+        z = h2 * y1 + h1 * y2 - 2 * h * y;
+        
+        # Likelihood
+        if( any( a<low )|| any(a>high) )
+        {
+            like = -Inf;
+        } else
+        {
+            if( any( d <= 0 ) || any( 1+ z / d / df <= 0 ) )
+            {
+                like = -Inf;
+            } else
+            {
+                # watch out
+                like = -sum( log( d ) + ( 2 + df ) * log( 1 + z / d / df) /2 );
+            }
+        }
+
+        # Gradient
+        GG = cbind( filter(cbind(0, 1),cbind( 1, -a[ 3] ), matrix( 1, t, 1 )),
+              filter(cbind( 0, 1 ),cbind( 1, -a[3] ), y * (df-2) / df ),
+              filter(cbind( 0, 1 ),cbind( 1, -a[ 3 ]), h ));
+        colnames(GG)<-NULL;
+        # watch out: per-observation derivative of the log-likelihood with respect to h
+        g1  = h / d + ( 2 + df ) * y / ( z + d * df) - ( 2 + df ) * h * z /( z + d * df ) / d;
+        G   = GG * ( g1 %*% matrix( 1, 1, 3) );
+        gra = apply( G, 2, sum );
+
+        # Hessian
+        g2 = 1 / d + 2 * h^2 / d^2 - (2 + df) * y / (z + d * df)^2 * ( -2 * y - 2 * df * h) -
+            (2 + df) * z / ( z + d * df) / d + 2 * (2 + df) * h * y / ( z + d * df ) / d +
+            (2 + df) * h * z / (z + d * df)^2  / d *( -2 * y - 2 * df * h ) -
+            2 * ( 2 + df ) * h^2 * z / ( z + d * df ) / d ^2;
+
+        GG2 = GG[ ,c( 1, 2, 3, 1, 2, 3, 1, 2, 3)] * GG [, c( 1, 1, 1, 2, 2, 2, 3, 3, 3 ) ];
+        HH  = matrix( 0, t, 9);
+        HH[ , 3 ] = filter(cbind(0, 1), cbind(1, -a[ 3 ] ),GG[ , 1 ] );
+        HH[ , 7 ] = HH[ , 3 ];
+        HH[ , 6 ] = filter(cbind(0, 1), cbind(1, -a[ 3 ] ),GG[ , 2 ] );
+        HH[ , 8 ] = HH[ ,6 ];
+        HH[ , 9 ] = filter(cbind(0, 1), cbind(1, -a[ 3 ] ),GG[ , 3 ] );
+        H   = GG2 * (g2 %*% matrix( 1, 1, 9)) + HH * (g1 %*% matrix( 1, 1, 9)) ;
+        hes = matrix( apply( H, 2, sum), 3, 3 );
+
+        e = eigen(hes);
+        if( all(e$values>0) )
+        {
+            hes    = -diag( 1, 3 );
+            negdef = 0;
+        }else
+        {
+            if( any(e$values > 0) )
+            {
+                negdef = 0;
+                val    = pmin( e$values, max( e$values[ e$values<0 ] ) / big);
+                hes    = e$vectors %*% diag( val, length(val)) %*% t(e$vectors);
+                
+            } else
+            {
+                negdef = 1;
+            }
+        }
+
+        # Steepest Ascent or Newton
+        if( any( c(a==low, a==high) ) )
+        {
+            # scalar step length along the gradient: -(g'g / g'Hg) * g
+            da = -( ( gra %*% gra ) / ( gra %*% hes %*% gra ) )[ 1 ] * gra;
+        }else
+        {
+            da = -gra %*% solve(hes);
+        }
+
+        # Termination criterion
+        term = (da %*% gra)[1] ;
+        if( ( term < tol1 ) && negdef )
+        {
+            break;
+        }
+        
+        # If you are on the boundary and want to get out, slide along
+        da[ ( a == low  ) & ( da < 0 ) ] = 0;
+        da[ ( a == high ) & ( da > 0 ) ] = 0;
+        # If you are stuck in a corner, terminate too
+        if( all(da==0) )
+        {
+            break;
+        }
+        
+        # Go no further than next boundary
+        hit = cbind((  low[ da != 0 ] - a[ da != 0 ] ) / da[ da != 0 ],
+                    ( high[ da != 0 ] - a[ da != 0 ] ) / da[ da != 0 ]);
+        if( length(hit[hit <= 0])) hit[ hit <= 0 ] = 0;
+        da  = min(cbind(hit, 1)) * da;
+        
+        # Step search
+        best = 0;
+        newa = a + gold^( best - 1 ) * da;
+        if( any( newa < low ) || any(newa > high) )
+        {
+            left = -Inf;
+        }else
+        {
+            h = filter(cbind( 0, newa[ 2 ] ), cbind( 1, -newa[ 3 ] ), y * ( df - 2 ) / df, s * (df-2) / df ) +
+                filter(cbind( 0, newa[ 1 ] ), cbind( 1, -newa[ 3 ] ), matrix(1, t, 1 ) );
+            d = h1 * h2 - h^2;
+            z = h2 * y1 + h1 * y2 - 2 * h * y;
+            if( any( d <= 0 ) || any( 1+ z / d / df <= 0 ) )
+            {
+                left = -Inf;
+            } else
+            {
+                left = -sum( log( d ) + ( 2 + df ) * log( 1 + z / d / df) /2 );
+            }
+        }
+
+        newa = a + gold^( best ) * da;
+        if( any( newa < low ) || any(newa > high) )
+        {
+            center = -Inf;
+        }else
+        {
+            h = filter(cbind( 0, newa[ 2 ] ), cbind( 1, -newa[ 3 ] ), y * ( df - 2 ) / df, s * (df-2) / df ) +
+                filter(cbind( 0, newa[ 1 ] ), cbind( 1, -newa[ 3 ] ), matrix(1, t, 1 ) );
+            d = h1 * h2 - h^2;
+            z = h2 * y1 + h1 * y2 - 2 * h * y;
+            if( any( d <= 0 ) || any( 1+ z / d / df <= 0 ) )
+            {
+                center = -Inf;
+            } else
+            {
+                center = -sum( log( d ) + ( 2 + df ) * log( 1 + z / d / df) /2 );
+            }
+        }
+
+        newa = a + gold^( best + 1 ) * da;
+        if( any( newa < low ) || any(newa > high) )
+        {
+            right = -Inf;
+        }else
+        {
+            h = filter(cbind( 0, newa[ 2 ] ), cbind( 1, -newa[ 3 ] ), y * ( df - 2 ) / df, s * (df-2) / df ) +
+                filter(cbind( 0, newa[ 1 ] ), cbind( 1, -newa[ 3 ] ), matrix(1, t, 1 ) );
+            d = h1 * h2 - h^2;
+            z = h2 * y1 + h1 * y2 - 2 * h * y;
+            if( any( d <= 0 ) || any( 1+ z / d / df <= 0 ) )
+            {
+                right = -Inf;
+            } else
+            {
+                right = -sum( log( d ) + ( 2 + df ) * log( 1 + z / d / df) /2 );
+            }
+        }
+
+        if( all( like > c( left, center, right ) )|| all( left > c( center, right ) ))
+        {
+            while( 1 )
+            {
+                best = best - 1;
+                center = left;
+                newa = a + gold^( best - 1 ) * da;
+                if( any( newa < low ) || any(newa > high) )
+                {
+                    left = -Inf;
+                }else
+                {
+                    h = filter(cbind( 0, newa[ 2 ] ), cbind( 1, -newa[ 3 ] ), y * ( df - 2 ) / df, s * (df-2) / df ) +
+                        filter(cbind( 0, newa[ 1 ] ), cbind( 1, -newa[ 3 ] ), matrix(1, t, 1 ) );
+                    d = h1 * h2 - h^2;
+                    z = h2 * y1 + h1 * y2 - 2 * h * y;
+                    if( any( d <= 0 ) || any( 1+ z / d / df <= 0 ) )
+                    {
+                        left = -Inf;
+                    } else
+                    {
+                        left = -sum( log( d ) + ( 2 + df ) * log( 1 + z / d / df) /2 );
+                    }
+                }
+
+                if( all(center >= c( like, left ) ) )
+                {
+                    break;
+                }
+            }
+        }else
+        {
+            if( all( right > c( left, center ) ) )
+            {
+                while( 1 )
+                {
+                    # expand the bracket to the right (same pattern as in garch1f4)
+                    best = best + 1;
+                    center = right;
+                    newa = a + gold^( best + 1 ) * da;
+                    if( any( newa < low ) || any(newa > high) )
+                    {
+                        right = -Inf;
+                    }else
+                    {
+                        h = filter(cbind( 0, newa[ 2 ] ), cbind( 1, -newa[ 3 ] ), y * ( df - 2 ) / df, s * (df-2) / df ) +
+                            filter(cbind( 0, newa[ 1 ] ), cbind( 1, -newa[ 3 ] ), matrix(1, t, 1 ) );
+                        d = h1 * h2 - h^2;
+                        z = h2 * y1 + h1 * y2 - 2 * h * y;
+                        if( any( d <= 0 ) || any( 1+ z / d / df <= 0 ) )
+                        {
+                            right = -Inf;
+                        } else
+                        {
+                            right = -sum( log( d ) + ( 2 + df ) * log( 1 + z / d / df) /2 );
+                        }
+                    } 
+                    
+                    if( center > right )
+                    {
+                        break;
+                    }      
+                }
+            }
+        }
+    }
+    
+    q = a;
+
+    return( q );
+}    
+
+# 
+#  @param    A   : [matrix] an indefinite symmetric matrix with non-negative diagonal elements
+#  
+#  @return   XXX : [matrix] positive semi-definite matrix with same diagonal elements as A that is closest
+#                           to A according to the Frobenius norm
+#
+# @note Written initially by Ilya Sharapov (1997)
+#
+# @references
+# \url{http://symmys.com/node/170}
+# See Meucci's script for "FitMultivariateGarch.m"
+#
+# @author Xavier Valls \email{flamejat@@gmail.com}
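+#
+# Example (illustrative sketch, not part of the original code):
+#   A = matrix( c( 1, 0.9, -0.9,  0.9, 1, 0.9,  -0.9, 0.9, 1 ), 3, 3 )
+#   min( eigen( A )$values )            # negative: A is indefinite
+#   min( eigen( minfro( A ) )$values )  # should be (approximately) non-negative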
+
+minfro = function( A )
+{
+    if( any( diag( A ) < 0) )
+    {
+        stop("Diagonal Elements Must Be Non-Negative!");
+    }else if( any( any( A != t(A) ) ) )
+    {
+        stop("Matrix Must Be Symmetric!");
+    }else if( all( eigen(A)$values >= 0 ) )
+    {
+        XXX = A;
+    }else
+    {    
+        # if things go wrong make rho bigger and wait longer
+        rho  = 0.75;
+        tol  = 3e-6; # tolerance
+        maxj = 10;   # max number of iterations
+
+        n = nrow( A );
+        M = diag( diag(A), ncol(A) );   # initialize with diagonal        
+        oldnorm    = norm( M - A , "F" );
+        oldnormj   = oldnorm;
+        normj      = oldnorm;   # norm after each sweep
+        incj       = c();       # decrease of the norm per sweep
+
+        j = 1;
+        incmax = 1e32;      # just to enter the loop
+        while((j<maxj) && (incmax>tol))
+        {
+            incmax = 0;
+            for( i in 1 : n ) 
+            {
+                a   = A[ -i, i ];   # i-th column of A without its diagonal element
+                m   = M[ -i, i ];
+                aii = A[ i, i ];
+                b   = a - rho * m;
+                # Newton's step
+                x = newton( M, i, b, m, aii, n, rho );  
+                P = diag( 1, n );   # identity matrix (sparse storage is not needed here)
+                P[ i, 1:n ] = t(x); 
+                # update
+                Mtest   = P %*% M %*% t(P);         
+                M       = Mtest;
+                inc     = oldnorm - norm( M - A, "F" );
+                oldnorm = norm( M - A, "F" );
+                # find maximal increment over iteration
+                if( inc > incmax )
+                {
+                     incmax = inc;  
+                }           
+            }
+
+            normj[ j+1 ] = oldnorm;
+            incj[ j ]    = oldnormj - oldnorm; 
+            oldnormj     = oldnorm;       
+
+            j = j + 1;
+
+        }    
+        
+        XXX = M;    
+    }
+    
+    return( XXX );
+}
+
+newton = function(M, i, b, m, aii, n, rho)
+{
+    ## Newton step
+    # Subroutine called internally by minfro
+
+    maxit = 40;
+    eps   = 1e-9; # small correction 
+                
+    # to handle singularity
+    l  = 0.0;
+    MM = M[ -i, -i ] + eps * diag( n - 1 );   # M with row and column i removed, plus a small ridge
+
+    j = 1;
+    # loop
+    while( j < maxit )
+    {
+        tmp = MM %*% MM + l * MM;
+        IM = solve( tmp );   # IM = inv(MM*MM + l*MM), as in the original MATLAB code
+        x = IM %*% ( MM %*% b - l * rho * m);
+        f = rho * rho * aii + 2 * rho * t(x) %*% m + t(x) %*% MM %*% x - aii;
+        
+        if( abs(f) < 1e-7 )
+        {
+            break;            
+        }
+        
+        dfdl = -2 * t( rho * m + MM %*% x ) %*% IM %*% ( rho * m + MM %*% x );
+        # Newton's step
+        l = l - f / dfdl;             
+        j = j + 1;
+    }
+
+    if( abs(f) < 1e-7 )
+    {
+        # converged
+        xx = append( as.vector( x ), rho, after = i - 1 );   # insert rho at position i
+    }else
+    {                   
+        # didn't converge
+        xx = matrix( 0, n, 1 ); 
+        xx[ i ] = 1;            
+    }
+
+    return( xx );
+}
+
+
+
+

Added: pkg/Meucci/R/MvnRnd.R
===================================================================
--- pkg/Meucci/R/MvnRnd.R	                        (rev 0)
+++ pkg/Meucci/R/MvnRnd.R	2013-07-28 11:28:31 UTC (rev 2654)
@@ -0,0 +1,44 @@
+#' Generates normal simulations whose sample moments match the population moments
+#'
+#' Adapted from file 'MvnRnd.m'. Most recent version of article and code available at http://www.symmys.com/node/162
+#' see A. Meucci - "Simulations with Exact Means and Covariances", Risk, July 2009
+#'
+#' @param M         a numeric indicating the sample first moment of the distribution
+#' @param S         a covariance matrix
+#' @param J         a numeric indicating the number of trials
+#'
+#' @author Ram Ahluwalia \email{rahluwalia@@gmail.com}
+#' @references 
+#' \url{http://www.symmys.com}
+#' TODO: Add Schur decomposition. Right now the function only samples from mvrnorm, so sample moments do not match population moments.
+#' I have sample code commented out below to implement this correctly but I require a function that returns the unitaryMatrix from a Schur decomposition
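+#' @examples
+#' \dontrun{
+#' # Illustrative sketch only (not part of the original commit): draw J scenarios with
+#' # target mean M and covariance S; until the Schur step is implemented the sample
+#' # moments only match the targets approximately.
+#' X = MvnRnd( M = rep( 0, 3 ), S = diag( 3 ), J = 1000 )
+#' colMeans( X )
+#' cov( X )
+#' }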
+#' @export
+MvnRnd = function( M , S , J )
+{
+  # MASS is referenced explicitly below, so no library() call is needed
+  X = MASS::mvrnorm( n = J , mu = M , Sigma = S ) # TODO: swap in Meucci's exact-moment method via the Schur decomposition
+  return( X )
+    
+    # # compute sample covariance: NOTE defined as "cov(Y,1)", not as "cov(Y)"
+    # S_ = cov( Y , 1 )
+    # 
+    # # solve Riccati equation using Schur method
+    #     zerosMatrix = matrix( rep( 0 , length( N * N ) ) , nrow = N )
+    #     # define the Hamiltonian matrix
+    #     H1 = cbind( zerosMatrix , -1*S_ )
+    #     H2 = cbind( -S , zerosMatrix ) 
+    #     H = rbind( H1 , H2 )
+    #     # perform its Schur decomposition. 
[TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/returnanalytics -r 2654


More information about the Returnanalytics-commits mailing list