[Depmix-commits] r288 - papers/jss

noreply at r-forge.r-project.org noreply at r-forge.r-project.org
Wed Jul 8 17:05:29 CEST 2009


Author: ingmarvisser
Date: 2009-07-08 17:05:29 +0200 (Wed, 08 Jul 2009)
New Revision: 288

Removed:
   papers/jss/robKalman.Rnw
   papers/jss/softwarereview.tex
   papers/jss/toaddlater.tex
Log:
Removed unnecessary files from jss directory.

Deleted: papers/jss/robKalman.Rnw
===================================================================
--- papers/jss/robKalman.Rnw	2009-07-08 15:01:49 UTC (rev 287)
+++ papers/jss/robKalman.Rnw	2009-07-08 15:05:29 UTC (rev 288)
@@ -1,153 +0,0 @@
-\documentclass[article]{jss}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%% declarations for jss.cls %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-\usepackage{geometry}
-\usepackage{color}
-\definecolor{darkblue}{rgb}{0.0,0.0,0.75}
-\definecolor{distrCol}{rgb}{0.0,0.4,0.4}
-\usepackage{amssymb}%
-% -------------------------------------------------------------------------------
-\RequirePackage{listings}
-%\usepackage{Sweave}
-\RequirePackage{ifthen}
-\newboolean{Sweave@gin}
-\setboolean{Sweave@gin}{true}
-\newboolean{Sweave@ae}
-\setboolean{Sweave@ae}{true}
-% -------------------------------------------------------------------------------
-\SweaveOpts{keep.source=TRUE}
-% -------------------------------------------------------------------------------
-<<SweaveListingsPreparations, results=tex, echo=FALSE>>=
-require(SweaveListingUtils)
-SweaveListingPreparations()
-setToBeDefinedPkgs(pkgs = c("distr","distrEx", "distrMod", "RandVar", "ROptEst"),
-                   keywordstyles = "\\bf\\color{distrCol}")
-@
-% -------------------------------------------------------------------------------
-\newcommand{\ttR}[1]{{\color{Rcolor}\tt #1}}
-\newcommand{\SSs}{\scriptscriptsize}
-\newcommand{\R}{\mathbb R}
-% -------------------------------------------------------------------------------
-%% almost as usual
-\author{Peter Ruckdeschel\\Fraunhofer ITWM Kaiserslautern\And
-        Bernhard Spangl\\BOKU Wien}
-\title{\proglang{R} Package~\pkg{robKalman}: Routines for Robust Kalman Filtering}
-
-%% for pretty printing and a nice hypersummary also set:
-\Plainauthor{Peter Ruckdeschel, Bernhard Spangl} %% comma-separated
-\Plaintitle{R Package robKalman: Routines for Robust Kalman Filtering} %% without formatting
-\Shorttitle{R Package robKalman} %% a short title (if necessary)
-
-
-%% an abstract and keywords
-\Abstract{
-  Package~\pkg{robKalman} provides ...
-}
-\Keywords{Kalman filtering, \proglang{S4} classes, Robustness}
-\Plainkeywords{Kalman filtering, S4 classes, Robustness} %% without formatting
-%% at least one keyword must be supplied
-%------------------------------------------------------------------------------
-
-%% publication information
-%% NOTE: Typically, this can be left commented and will be filled out by the technical editor
-%% \Volume{13}
-%% \Issue{9}
-%% \Month{September}
-%% \Year{2004}
-%% \Submitdate{2004-09-29}
-%% \Acceptdate{2004-09-29}
-
-%% The address of (at least) one author should be given
-%% in the following format:
-\Address{
-  Peter Ruckdeschel\\
-  Fraunhofer-Institut f\"ur\\
-  Techno-und Wirtschaftsmathematik\\
-  Fraunhofer-Platz 1\\
-  67663 Kaiserslautern, Germany\\
-  E-mail: \email{Peter.Ruckdeschel@itwm.fraunhofer.de}\\
-  Bernhard Spangl\\
-  --- please fill in ---
-  \bigskip \\
-}
-%% It is also possible to add a telephone and fax number
-%% before the e-mail in the following format:
-%% Telephone: +43/1/31336-5053
-%% Fax: +43/1/31336-734
-
-%% for those who use Sweave please include the following line (with % symbols):
-%% need no \usepackage{Sweave.sty}
-
-%% end of declarations %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-%------------------------------------------------------------------------------
-\begin{document}
-% -----------------------------------------------------------------------------
-\section{Introduction}
-% -----------------------------------------------------------------------------
-*which methods
-*which (not necessarily robust) Kalman filter implementations already exist on CRAN, and
- Kalman-like software by Stoffer/Shumway
-*explanation of the structure of the paper
-http://www.cs.unc.edu/~welch/kalman/
-http://www.cs.ubc.ca/~murphyk/Software/Kalman/kalman.html
-
-\section{Contents}
-
--classical Kalman filter (references only + correction step)
--ACM filter (references only + correction step)
--rLS filter (references only + correction step)
-[-rLS.IO filter]
-[-rLS.IOAO filter]
-[-rIC filter]
-[-m - *** filter]
-
-* utility infrastructure
-  -> elementary simulation (to be enhanced)
-  [-> special plot methods]
-  [-> special summary and print methods]
-
-\section{Implementation Concept}
-
-*Interfaces
- to other packages  
-    sspir robfilter dse dynlm dyn???
- [to other software -> DEBI]
-
-*user interface = robKalman + argument structure
-
-*recursion structure
-
-*time stamp management
-
-*layer concept
-*class concept
- + SSM
- + method classes (including control)
- + output classes
-
-
-\section{Examples}
- / oriented along the demos
-* real world
-* simulation
--> graphics: boxplots + exemplary paths
--> tables: empirical MSE
-
-
-\section{Availability}
-
-\section{Planned Extensions}
-*smoother
-*EM algorithm
-%------------------------------------------------------------------------------
-\section*{Acknowledgments}
-%------------------------------------------------------------------------------
-%------------------------------------------------------------------------------
-\bibliography{distrMod}
-%------------------------------------------------------------------------------
-\end{document}
-%------------------------------------------------------------------------------

Deleted: papers/jss/softwarereview.tex
===================================================================
--- papers/jss/softwarereview.tex	2009-07-08 15:01:49 UTC (rev 287)
+++ papers/jss/softwarereview.tex	2009-07-08 15:05:29 UTC (rev 288)
@@ -1,47 +0,0 @@
-\documentclass[softwarereview]{jss}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%% declarations for jss.cls %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-%% reviewer
-\Reviewer{Antony Unwin\\University of Augsburg}
-\Plainreviewer{Antony Unwin}
-
-%% about the software
-\Softwaretitle{\pkg{Aabel} 1.5.7}
-\Plaintitle{Aabel 1.5.7}
-%% if different from \Softwaretitle also set
-%% \Shorttitle{Aabel 1.5.7}
-\Publisher{Gigawiz Ltd.\ Co.}
-\Pubaddress{Tulsa, OK}
-\Price{USD~349 (standard), USD~249 (academic)}
-\URL{http://www.gigawiz.com/}
-
-%% publication information
-%% NOTE: Typically, this can be left commented and will be filled out by the technical editor
-%% \Volume{11}
-%% \Issue{1}
-%% \Month{July}
-%% \Year{2004}
-%% \Submitdate{2004-07-20}
-
-%% address of reviewer
-\Address{
-  Antony Unwin\\
-  University of Augsburg\\
-  Department of Computer Oriented Statistics and Data Analysis\\
-  D-86135 Augsburg, Germany\\
-  E-mail: \email{Antony.Unwin@math.uni-augsburg.de}\\
-  URL: \url{http://www1.math.uni-augsburg.de/~unwin/}
-}
-
-%% end of declarations %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-
-\begin{document}
-
-%% include the review as usual
-%% Note that you should use the \pkg{}, \proglang{} and \code{} commands.
-
-\end{document}

Deleted: papers/jss/toaddlater.tex
===================================================================
--- papers/jss/toaddlater.tex	2009-07-08 15:01:49 UTC (rev 287)
+++ papers/jss/toaddlater.tex	2009-07-08 15:05:29 UTC (rev 288)
@@ -1,247 +0,0 @@
-
-\subsection{Mixtures of LMMs}
-
-The above forward recursion can readily be generalized to mixture 
-models, in which it is assumed that the data are realizations of a 
-number of different LMMs and the goal is to assign posterior 
-probabilities to sequences of observations. This situation occurs, 
-for example, in learning data where different learning strategies may 
-lead to different answer patterns. From an observed sequence of 
-responses, it may not be immediately clear from which learning 
-process they stem. Hence, it is interesting to consider a mixture of 
-latent Markov models which incorporate restrictions that are 
-consistent with each of the learning strategies. 
-
-To compute the likelihood of a mixture of $K$ models, define the 
-forward recursion variables as follows (these variables now have an 
-extra index $k$ indicating that observation and transition 
-probabilities are from latent model $k$):
-\begin{align}
-\begin{split}
-\phi_{1}(j_{k}) &=  Pr(\vc{O}_{1}, 
-S_{1}=j_{k})=p_{k}\pi_{j_{k}}b_{j_{k}}(\vc{O}_{1}).
-\end{split}\label{eq:fwd1mix} \\
-\begin{split}
-\phi_{t}(j_{k})   &=   Pr(\vc{O}_{t}, S_{t}=j_{k}|\vc{O}_{1}, \ldots, 
-\vc{O}_{t-1}) \\
-			&= \left[ \sum_{k=1}^{K} \sum_{i=1}^{n_{k}} \phi_{t-1}(i_{k}) 
-			a_{ij_{k}}b_{j_{k}}(\vc{O}_{t}) \right] \times (\Phi_{t-1})^{-1},
-\end{split}\label{eq:fwdtmix} 
-\end{align}
-where $\Phi_{t} = \sum_{k=1}^{K}\sum_{j=1}^{n_{k}} \phi_{t}(j_{k})$.
-Note that the double sum over $k$ and $j$ is simply an enumeration
-of all the states of the model.  Now, because $a_{ij_{k}}=0$ whenever
-$S_{i}$ is not part of component $k$, the sum over $k$ can be dropped
-and hence equation~\ref{eq:fwdtmix} reduces to:
-\begin{equation}
-	\phi_{t}(j_{k}) = \left[ \sum_{i=1}^{n_{k}} \phi_{t-1}(i_{k}) 
-			a_{ij_{k}}b_{j_{k}}(\vc{O}_{t}) \right] \times (\Phi_{t-1})^{-1}.
-The log-likelihood is computed by applying equation~\ref{eq:logl} to
-these terms.  For multiple cases, the log-likelihood is simply the sum
-of the individual log-likelihoods. 
-
-
-Consider a mixture of two components, one with two states and the
-other with three states.  Using
-equations~(\ref{eq:fwd1}--\ref{eq:fwdt}) to compute the log-likelihood
-of this model, $O(Tn^{2})=O(T\times 25)$ computations are needed,
-whereas with the mixture
-equations~(\ref{eq:fwd1mix}--\ref{eq:fwdtmix}) only $\sum_{i}
-O(n_{i}^{2}T)$ computations are needed, in this case $O(T \times 13)$.
-Hence, even in this simple example the computational cost is almost halved.
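The equivalence between the block-diagonal recursion and the per-component computation is easy to check numerically. The following Python sketch (an illustration written for this note, not code from the package) implements the scaled forward recursion for discrete observations and verifies that a block-diagonal 2+3-state model reproduces the mixture likelihood $\sum_{k} p_{k}\,Pr(\vc{O} \mid k)$; all parameter values are made up:

```python
import math

def forward_loglik(pi, A, B, obs):
    """Scaled forward recursion; returns log Pr(O_1, ..., O_T)."""
    n = len(pi)
    phi = [pi[j] * B[j][obs[0]] for j in range(n)]
    ll = 0.0
    for t, o in enumerate(obs):
        if t > 0:
            phi = [sum(phi[i] * A[i][j] for i in range(n)) * B[j][o]
                   for j in range(n)]
        scale = sum(phi)                  # Phi_t, the scaling factor
        ll += math.log(scale)
        phi = [x / scale for x in phi]    # normalized phi_t
    return ll

# component 1: two states; component 2: three states (illustrative values)
pi1, A1 = [0.5, 0.5], [[0.9, 0.1], [0.2, 0.8]]
B1 = [[0.8, 0.2], [0.3, 0.7]]
pi2 = [1/3, 1/3, 1/3]
A2 = [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]
B2 = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]
p = [0.4, 0.6]                            # mixing proportions
obs = [0, 1, 1, 0, 1, 0, 0]

# block-diagonal combined model: a_{ij_k} = 0 across components
pi_c = [p[0] * x for x in pi1] + [p[1] * x for x in pi2]
A_c = [row + [0, 0, 0] for row in A1] + [[0, 0] + row for row in A2]
B_c = B1 + B2

ll_block = forward_loglik(pi_c, A_c, B_c, obs)
ll_mix = math.log(p[0] * math.exp(forward_loglik(pi1, A1, B1, obs))
                  + p[1] * math.exp(forward_loglik(pi2, A2, B2, obs)))
```

The two quantities agree to machine precision, which is exactly the point made above: the block-diagonal recursion computes the mixture likelihood, and restricting the sums to within-component states loses nothing.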
-
-\section{Gradients}
-
-\newcommand{\fpp}{\frac{\partial} {\partial \lambda_{1}}}
-
-See equations 10--12  in \cite{Lystig2002} for the score recursion 
-functions of the hidden Markov model for a univariate time series. 
-Here, the corresponding score recursion for the multivariate mixture 
-case is provided. The $t=1$ components of this score recursion are 
-defined as (for an arbitrary parameter $\lambda_{1}$):
-\begin{align}
-\psi_{1}(j_{k};\lambda_{1}) &:=  \fpp Pr(\vc{O}_{1}, S_{1}=j_{k}) \\
-\begin{split} 
-	&= \left[  \fpp p_{k} \right] \pi_{j_{k}}b_{j_{k}}(\vc{O}_{1}) + 
-	p_{k}\left[ \fpp \pi_{j_{k}} \right] b_{j_{k}}(\vc{O}_{1}) \\
-	& \qquad  + p_{k}\pi_{j_{k}} \left[ \fpp 
-b_{j_{k}}(\vc{O}_{1})\right],
-\end{split} \label{eq:psi1}
-\end{align}
-and for $t>1$ the definition is:
-\begin{align}
-\psi_{t}(j_{k};\lambda_{1})  & =  \frac{\fpp Pr(\vc{O}_{1}, \ldots, 
-\vc{O}_{t}, S_{t}=j_{k})}
-			{Pr(\vc{O}_{1}, \ldots, \vc{O}_{t-1})}  \\
-\begin{split} 
-	& =  
-			 \sum_{i=1}^{n_{k}} \Bigg\{ \psi_{t-1}(i_{k};\lambda_{1})a_{ij_{k}} 
-			 b_{j_{k}}(\vc{O}_{t}) \\ 
-			 &\qquad +\phi_{t-1}(i_{k}) \left[ \fpp a_{ij_{k}} \right] b_{j_{k}} 
-(\vc{O}_{t}) \\
-			&\qquad +\phi_{t-1}(i_{k})a_{ij_{k}}  \left[ \fpp b_{j_{k}} 
-(\vc{O}_{t}) \right] \Bigg\} 
-			\times (\Phi_{t-1})^{-1}.
-\end{split} \label{eq:psit}
-\end{align}
-
-Using the above equations, \cite{Lystig2002} derive the following
-expression for the partial derivative of the log-likelihood:
-\begin{equation}
-	\fpp l_{T}= 	
-		\frac{\mathbf{\Psi}_{T}(\lambda_{1})}{\mathbf{\Phi}_{T}},
-\end{equation}
-where $\Psi_{t}=\sum_{k=1}^{K} \sum_{j=1}^{n_{k}} 
-\psi_{t}(j_{k};\lambda_{1})$. 
-Starting from the equation for the logarithm of the likelihood, this 
-is easily seen to be correct: 
-\begin{eqnarray*}
-	\fpp \log Pr(\vc{O}_{1}, \ldots, \vc{O}_{T}) &=& Pr(\vc{O}_{1}, 
-\ldots, \vc{O}_{T})^{-1} 
-	\fpp Pr(\vc{O}_{1}, \ldots, \vc{O}_{T}) \\
-	&=&  \frac{Pr(\vc{O}_{1}, \ldots, \vc{O}_{T-1})}{Pr(\vc{O}_{1}, 
-\ldots, \vc{O}_{T})}  \Psi_{T} (\lambda_{1}) \\
-	&=&  \frac{\mathbf{\Psi}_{T}(\lambda_{1})}{\mathbf{\Phi}_{T}}.
-\end{eqnarray*}
-
-Further, to actually compute the gradients, the partial derivatives of
-the parameters and observation distribution functions are necessary,
-i.e., $\fpp p_{k}$, $\fpp \pi_{i}$, $\fpp a_{ij}$, and $\fpp
-\vc{b}_{i}(\vc{O}_{t})$.  Only the latter requires some attention.  We
-need the derivatives $\fpp \vc{b}_{j}(\vc{O}_{t})=\fpp
-\vc{b}_{j}(O_{t}^{1}, \ldots, O_{t}^{m})$ for arbitrary parameters
-$\lambda_{1}$.  Boldface is used here to stress that $\vc{b}_{j}$ is a
-vector of functions.  First note that because of local independence we
-can write:
-\begin{equation*}
-	b_{j}(O_{t}^{1}, \ldots, O_{t}^{m}) = b_{j}(O_{t}^{1}) \times 
-	b_{j}(O_{t}^{2}) \times \cdots \times b_{j}(O_{t}^{m}).
-\end{equation*}
-Applying the product rule for derivatives we get:
-\begin{equation}
-	\fpp [b_{j}(O_{t}^{1}, \ldots, O_{t}^{m})] =
-	\sum_{l=1}^{m} \left[ \prod_{i=1, \ldots, \hat{l}, \ldots, m} 
-b_{j}(O_{t}^{i}) \right] \times
-	\fpp  [b_{j}(O_{t}^{l})],
-	\label{partialProd}
-\end{equation}
-where $\hat{l}$ indicates that this term is left out of the product. 
-These latter terms, $\frac{\partial} {\partial \lambda_{1}}  
-[b_{j}(O_{t}^{l})]$, are easy to compute for either multinomial or 
-Gaussian observation densities $b_{j}(\cdot)$.
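Equation~\ref{partialProd} can be checked against a numerical derivative. A minimal Python sketch, assuming a Gaussian density $b_{j}(\cdot)$ with its mean playing the role of $\lambda_{1}$ (the values are illustrative only):

```python
import math

def dnorm(x, mean, sd=1.0):
    """Gaussian density b_j(x)."""
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

def d_dnorm_dmean(x, mean, sd=1.0):
    """Analytic partial derivative of dnorm with respect to its mean."""
    return dnorm(x, mean, sd) * (x - mean) / sd**2

obs = [0.3, -1.2, 2.1]   # one multivariate observation (O^1, ..., O^m)
mean = 0.5               # the parameter lambda_1

# product rule: for each l, differentiate factor l and keep the others
analytic = sum(
    d_dnorm_dmean(obs[l], mean)
    * math.prod(dnorm(o, mean) for i, o in enumerate(obs) if i != l)
    for l in range(len(obs))
)

def joint(mu):
    """b_j(O^1, ..., O^m) under local independence."""
    return math.prod(dnorm(o, mu) for o in obs)

# central finite difference of the joint density
h = 1e-6
numeric = (joint(mean + h) - joint(mean - h)) / (2 * h)
```

The analytic sum and the finite-difference value agree to several decimal places, confirming the term-by-term form of the product-rule expansion.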
-
-\subsection{Generating data}
-
-The \code{dmm}-class has a \code{generate} method that can be used to 
-generate data according to a specified model. 
-
-\begin{verbatim}
-gen <- generate(c(100, 50), mod)
-\end{verbatim}
-
-
-\section{Multi group/case analysis}
-
-\begin{verbatim}
-conpat <- rep(1, 15)
-conpat[1] <- 0
-conpat[8:9] <- 0
-conpat[14:15] <- 0
-stv <- c(1, 0.9, 0.1, 0.1, 0.9, 5.5, 0.2, 0.5, 0.5, 6.4, 0.25, 0.9, 0.1, 0.5, 0.5)
-mod <- dmm(nstates = 2, itemt = c("n", 2), stval = stv, conpat = conpat)
-\end{verbatim}
-
-\code{depmix4} can handle multiple cases or multiple groups. A
-multigroup model is specified using the function \code{mgdmm} as
-follows:
-
-\begin{verbatim}
-mgr <- mgdmm(dmm = mod, ng = 3, trans = TRUE, obser = FALSE)
-mgrfree <- mgdmm(dmm = mod, ng = 3, trans = FALSE)
-\end{verbatim}
-
-The \code{ng} argument specifies the number of groups, and the
-\code{dmm} argument specifies the model for each group.  \code{dmm}
-can be either a single model or a list of models of length \code{ng}.
-If it is a single model, each group has an identical structural model
-(same fixed and constrained parameters).  Three further arguments,
-\code{trans}, \code{obser}, and \code{init}, can be used to constrain
-parameters between groups.  Setting any of these to \code{TRUE} causes
-the corresponding transition, observation, or initial state parameters
-to be estimated equal between groups\footnote{There is at this moment
-no way of fine-tuning this to restrict equalities to individual
-parameters.  However, this can be accomplished by manually changing
-the linear constraint matrix and the corresponding upper and lower
-boundaries.}.
-
-In this example, the model from above is fitted to the three observed
-series; \code{trans=TRUE} ensures that the transition matrix
-parameters are constrained to be equal between the models for these
-series, whereas the observation parameters are estimated freely,
-e.g.\ to capture learning effects. 
-
-The log-likelihood ratio statistic can be used to test whether
-constraining these transition parameters significantly reduces the
-goodness-of-fit of the model.  The statistic has an approximate
-$\chi^{2}$ distribution with $df=4$ because in each model but the
-first, two transition matrix parameters were estimated equal to the
-parameters in the first model (note that the other two transition
-parameters already had to be constrained to ensure that the rows of
-the transition matrices sum to 1).
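For even degrees of freedom the $\chi^{2}$ upper tail has a closed form, so the mechanics of this test can be sketched without any statistics library. In the following Python fragment the two log-likelihood values are made up purely for illustration:

```python
import math

def chi2_sf(x, df):
    """Upper-tail probability of the chi-square distribution.

    Uses the closed form that holds for even df:
    P(X > x) = exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!
    """
    assert df % 2 == 0 and x >= 0
    return math.exp(-x / 2) * sum((x / 2)**k / math.factorial(k)
                                  for k in range(df // 2))

# hypothetical fitted log-likelihoods (illustrative numbers only)
ll_free = -1230.1         # transition parameters free per group
ll_constrained = -1234.6  # transition parameters equal across groups

lrt = 2 * (ll_free - ll_constrained)   # log-likelihood ratio statistic
p_value = chi2_sf(lrt, df=4)           # ~0.061: not rejected at the 5% level
```

With these (made-up) values the equality constraint would be retained, i.e., constraining the transition parameters does not significantly worsen the fit.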
-
-
-\section{Mixtures of latent Markov models}
-
-\code{depmix4} provides support for fitting mixtures of latent Markov
-models using the \code{mixdmm} function; it takes a list of
-\code{dmm} models as its argument, possibly together with starting
-values for the mixing proportions of the component models.  An example
-is included in the help files; it fits the model to data from a
-discrimination learning experiment, provided as the data set
-\code{discrimination} \cite{Raijmakers2001}. 
-
-\section{Finite mixtures and latent class models}
-
-The function \code{lca} can be used to specify latent class models
-and/or finite mixture models.  It is simply a wrapper for the
-\code{dmm} function: all it does is add appropriate numbers of zeroes
-and ones to the parameter specification vectors for starting values,
-fixed values, and linear constraints.  When a model has class
-\code{lca}, the summary function does not print the transition matrix
-(because it is fixed rather than estimated).
-
-
-\section{Starting values}
-
-Although providing your own starting values is preferable,
-\pkg{depmixS4} has a routine for generating starting values using the
-\code{kmeans} function from the \pkg{stats} package.  This will
-usually provide reasonable starting values, but it can be far off in a
-number of cases.  First, for univariate categorical time series,
-\code{kmeans} does not work at all, and \pkg{depmixS4} will issue a
-warning.  Second, for multivariate series with unordered categorical
-items with more than two categories, \code{kmeans} may provide good
-starting values, but they may equally be completely off, due to the
-implicit assumption in \code{kmeans} that the categories indicate an
-underlying continuum.  Starting values using \code{kmeans} are
-automatically provided when a model is specified without starting
-values.  The argument \code{kmst} of the \code{fitdmm} function can be
-used to control this behavior.
-
-Starting values of the parameters, whether user-provided or generated,
-can be further improved by using posterior estimates obtained with the
-Viterbi algorithm \cite{Rabiner1989}.  That is, first the a posteriori
-latent states are generated from the current parameter values for the
-data at hand.  Next, new parameter estimates are derived from these a
-posteriori latent states.  This is done by default and can be
-controlled by the \code{postst} argument.  Provided that the starting
-values were close to their true values, this procedure pushes the
-parameters further in the right direction.  If, however, the original
-values were bad, this procedure may result in bad estimates, i.e.,
-optimization will converge to a non-optimal local maximum of the
-log-likelihood.
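The posterior-state boost described above can be sketched as follows, assuming a simple discrete-emission model; this is an illustration of the idea, not the package's implementation (which handles general observation densities):

```python
import math

def viterbi(pi, A, B, obs):
    """Most likely state sequence, computed in log space."""
    n = len(pi)
    delta = [math.log(pi[j]) + math.log(B[j][obs[0]]) for j in range(n)]
    back = []
    for o in obs[1:]:
        prev = delta
        # best predecessor for each state j at this time step
        back.append([max(range(n), key=lambda i: prev[i] + math.log(A[i][j]))
                     for j in range(n)])
        delta = [prev[back[-1][j]] + math.log(A[back[-1][j]][j])
                 + math.log(B[j][o]) for j in range(n)]
    path = [max(range(n), key=lambda j: delta[j])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

def reestimate_transitions(path, n):
    """New transition estimates from the decoded state path."""
    counts = [[0.0] * n for _ in range(n)]
    for i, j in zip(path, path[1:]):
        counts[i][j] += 1
    out = []
    for row in counts:
        s = sum(row)
        out.append([c / s for c in row] if s else [1.0 / n] * n)
    return out

# illustrative two-state model and observed series
pi = [0.5, 0.5]
A = [[0.95, 0.05], [0.10, 0.90]]
B = [[0.9, 0.1], [0.2, 0.8]]
obs = [0, 0, 0, 1, 1, 1, 1, 0, 0]

path = viterbi(pi, A, B, obs)          # decoded a posteriori states
A_new = reestimate_transitions(path, 2)  # boosted transition estimates
```

As in the text, the quality of `A_new` depends entirely on how close the current parameters are to their true values: the decoded path, and hence the re-estimates, inherit any gross errors in the starting values.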
-
-
-


More information about the depmix-commits mailing list