[IPSUR-commits] r103 - pkg/IPSUR/inst/doc www/book
noreply at r-forge.r-project.org
Sat Dec 26 21:58:44 CET 2009
Author: gkerns
Date: 2009-12-26 21:58:43 +0100 (Sat, 26 Dec 2009)
New Revision: 103
Modified:
pkg/IPSUR/inst/doc/IPSUR.Rnw
www/book/index.php
Log:
small tweaks
Modified: pkg/IPSUR/inst/doc/IPSUR.Rnw
===================================================================
--- pkg/IPSUR/inst/doc/IPSUR.Rnw 2009-12-26 00:35:37 UTC (rev 102)
+++ pkg/IPSUR/inst/doc/IPSUR.Rnw 2009-12-26 20:58:43 UTC (rev 103)
@@ -1733,15 +1733,15 @@
\item what are data
\begin{itemize}
-\item the different types, especially quantitative versus qualitative, and
-discrete versus continuous
+\item different types, especially quantitative versus qualitative, and discrete
+versus continuous
\end{itemize}
-\item how to describe data both visually and numerically, and how the methods
-differ depending on the data type
-\item CUSS
-\item how to do all of the above but in the context of describing data broken
-down by groups
-\item the concept of factor and what it means for subdividing data
+\item fundamental properties of data distributions, including center, spread,
+shape, and crazy observations
+\item methods to describe data (visually/numerically) with respect to the
+properties, and how the methods differ depending on the data type
+\item all of the above in the context of grouped data, and in particular,
+the concept of a factor
\end{itemize}
\section{Types of Data\label{sec:Types-of-Data}}
@@ -3498,8 +3498,17 @@
\paragraph*{What do I want them to know?}
+\begin{itemize}
+\item there are multiple interpretations of probability, and the methods
+used depend somewhat on the philosophy chosen
+\item nuts and bolts of basic probability jargon: sample spaces, events,
+probability functions, \emph{etc}.
+\item how to count
+\item conditional probability and its relationship with independence
+\item Bayes' Rule and how it relates to the subjective view of probability
+\item what we mean by `random variables', and where they come from
+\end{itemize}
-
\section{Sample Spaces}
For a random experiment $E$, the set of all possible outcomes of
@@ -5874,11 +5883,16 @@
\paragraph*{What do I want them to know?}
\begin{itemize}
-\item a lot of discrete models
-\item the idea of expectation and how to calculate it
-\item moment generating functions
-\item the dpqr family of functions, and their distr equivalents
-\item what a PMF is, supports,
+\item how to choose a reasonable discrete model under a variety of physical
+circumstances
+\item the notion of mathematical expectation, how to calculate it, and basic
+properties
+\item moment generating functions (yes, I want them to hear about those)
+\item the general tools of the trade for manipulation of continuous random
+variables, integration, \emph{etc}.
+\item some details on a couple of discrete models, and exposure to a bunch
+of other ones
+\item how to make new discrete random variables from old ones
\end{itemize}
\section{Discrete Random Variables\label{sec:Discrete-Random-Variables}}
@@ -6343,7 +6357,7 @@
Simulate $k$ variates & & $\mathtt{rbinom(k,size=n,prob=p)}$ & & $\mathtt{r(X)(k)}$\tabularnewline
\hline
& & & & \tabularnewline
-\multicolumn{5}{r}{For $\mathtt{distr}$ need $\mathtt{X=Binom(size=n,prob=p)}$}\tabularnewline
+\multicolumn{5}{r}{For $\mathtt{distr}$ need \texttt{X <-} $\mathtt{Binom(size=}n\mathtt{,\ prob=}p\mathtt{)}$}\tabularnewline
\end{tabular}
\par\end{centering}
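To see the correspondence in action, here is a minimal sketch; the values size = 10 and prob = 0.4 are chosen only for illustration, and the \texttt{distr} package must be installed and loaded first.

<<>>=
library(distr)                      # provides Binom() and the accessors d(), p(), r()
dbinom(3, size = 10, prob = 0.4)    # base R: P(X = 3)
X <- Binom(size = 10, prob = 0.4)   # distr: build the random variable object once
d(X)(3)                             # same value, via the distr accessor
p(X)(3)                             # P(X <= 3); compare pbinom(3, size = 10, prob = 0.4)
r(X)(5)                             # five simulated variates; compare rbinom(5, size = 10, prob = 0.4)
@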
@@ -6352,24 +6366,7 @@
\end{table}
-%
-\begin{table}
-\begin{centering}
-\begin{tabular}{cccc}
-\multicolumn{1}{c}{} & & $\mathtt{X<-Binom(size=}n\mathtt{,\ prob=}p\mathtt{)}$\tabularnewline
-$\mathsf{dbinom}(x,\,\mathtt{size}=n,\,\mathtt{prob}=p)$ & $\P(X=x)$ & PMF & $\mathtt{d(X)}(x)$\tabularnewline
-$\mathsf{pbinom}(x,\,\mathtt{size}=n,\,\mathtt{prob}=p)$ & $\P(X\leq x)$ & CDF & $\mathsf{p}(\mathtt{X})(x)$\tabularnewline
-$\mathsf{rbinom}(k,\,\mathtt{size}=n,\,\mathtt{prob}=p)$ & & random variates & $\mathsf{r}(\mathtt{X})(k)$\tabularnewline
- & & & \tabularnewline
-\end{tabular}
-\par\end{centering}
-\caption{Correspondence between base \textsf{R} and \texttt{distr} with $X\sim\mathsf{dbinom}(\mathtt{size}=n,\,\mathtt{prob}=p)$}
-
-\end{table}
-
-
-
\section{Expectation and Moment Generating Functions\label{sec:Expectation-and-Moment}}
@@ -6389,16 +6386,16 @@
\begin{defn}
More generally, given a function $g$ we define the \emph{expected
value of} $g(X)$ by\begin{equation}
-\E\: g(X)=\sum_{x\in S}g(x)f_{X}(x),\end{equation}
-provided the (potentially infinite) series $\sum_{x}|g(x)|f(x)$ converges,
-and in that case we say that $\E g(X)$ \emph{exists}.
+\E\, g(X)=\sum_{x\in S}g(x)f_{X}(x),\end{equation}
+provided the (potentially infinite) series $\sum_{x}|g(x)|f(x)$ is
+convergent, in which case we say that $\E g(X)$ \emph{exists}.
\end{defn}
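As a small computational sketch of the definition (the PMF below, a binomial with size 3 and prob 1/2, and the choice $g(x)=x^{2}$ are ours, purely for illustration), $\E\, g(X)$ is nothing more than the PMF-weighted sum of the $g(x)$ values:

<<>>=
x <- 0:3                               # the support S of X
fx <- dbinom(x, size = 3, prob = 0.5)  # the PMF values f_X(x)
g <- function(x) x^2
sum(g(x) * fx)                         # E g(X) = sum over S of g(x) f_X(x), here E X^2
@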
In this notation the variance is $\sigma^{2}=\E(X-\mu)^{2}$ and we
prove the identity \begin{equation}
\E(X-\mu)^{2}=\E X^{2}-(\E X)^{2}\end{equation}
in Exercise BLANK. Intuitively, for repeated observations of $X$
we would expect the sample mean of the $g(X)$ values to closely approximate
-$\E\ g(X)$ as the sample size increases without bound.
+$\E\, g(X)$ as the sample size increases without bound.
Let us take the analogy further. If we expect $g(X)$ to be close
to $\E g(X)$ on the average, where would we expect $3g(X)$ to be
@@ -6425,9 +6422,9 @@
\begin{defn}
Given a random variable $X$, its \emph{moment generating function}
(abbreviated MGF) is defined by the formula\begin{equation}
-M_{X}(t)=\E\:\me^{tX}=\sum_{x\in S}\me^{tx}f_{X}(x),\end{equation}
-provided the (potentially infinite) series exists and is finite for
-all $t$ in a neighborhood of zero (that is, for all $-\epsilon<t<\epsilon$,
+M_{X}(t)=\E\me^{tX}=\sum_{x\in S}\me^{tx}f_{X}(x),\end{equation}
+provided the (potentially infinite) series is convergent for all $t$
+in a neighborhood of zero (that is, for all $-\epsilon<t<\epsilon$,
for some $\epsilon>0$).
\end{defn}
Note that for any MGF $M_{X}$, \begin{equation}
@@ -6453,9 +6450,10 @@
\subsection*{Applications}
-There are two uses of moment generating functions that will be used
-in this book. The first is the fact that the MGF may be used to accurately
-identify probability distributions, which rests on the following:
+We will discuss three applications of moment generating functions
+in this book. The first is the fact that an MGF may be used to accurately
+identify the probability distribution that generated it, which rests
+on the following:
\begin{thm}
The moment generating function, if it exists in a neighborhood of
zero, determines a probability distribution \emph{uniquely}. \end{thm}
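As an illustration of how the theorem gets used (the particular MGF below is ours, purely for illustration): suppose we are told only that a random variable $X$ has
\begin{equation}
M_{X}(t)=\left(\frac{2}{3}+\frac{1}{3}\me^{t}\right)^{5},\quad-\infty<t<\infty.
\end{equation}
We recognize the right hand side as the binomial MGF $(q+p\me^{t})^{n}$ with $n=5$ and $p=1/3$, and since an MGF determines its distribution uniquely, $X$ must be binomial with exactly those parameters.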
@@ -6473,19 +6471,19 @@
\end{example}
-An MGF is also known as a {}``Laplace Transform'' and is used in
-that context in many branches of science and engineering.
+An MGF is also known as a {}``Laplace Transform'' and is manipulated
+in that context in many branches of science and engineering.
\subsection*{Why is it called a Moment Generating Function?}
-This brings us to the second powerful use of MGFs. Many of the models
-we study have a simple MGF indeed which allows us to determine the
-mean, variance, and even higher moments very quickly. Let us see why.
-We already know that
+This brings us to the second powerful application of MGFs. Many of
+the models we study have a simple MGF indeed, which permits us to
+determine the mean, variance, and even higher moments very quickly.
+Let us see why. We already know that
\begin{alignat*}{1}
-M(t)= & \sum_{x\in S}\me^{tx}f(x),\end{alignat*}
+M(t)= & \sum_{x\in S}\me^{tx}f(x).\end{alignat*}
Take the derivative with respect to $t$ to get\begin{equation}
M'(t)=\frac{\diff}{\diff t}\left(\sum_{x\in S}\me^{tx}f(x)\right)=\sum_{x\in S}\ \frac{\diff}{\diff t}\left(\me^{tx}f(x)\right)=\sum_{x\in S}x\me^{tx}f(x),\end{equation}
and so if we plug in zero for $t$ we see \begin{equation}
@@ -6520,16 +6518,16 @@
\sigma^{2}= & \E X^{2}-(\E X)^{2},\\
= & n(n-1)p^{2}+np-n^{2}p^{2},\\
= & np-np^{2}=npq.\end{alignat*}
-\end{example}
+See how much easier that was?\end{example}
\begin{rem}
We learned in this section that $M^{(r)}(0)=\E X^{r}$. We remember
from Calculus II that certain functions $f$ can be represented by
a Taylor series expansion about a point $a$, which takes the form\begin{equation}
f(x)=\sum_{r=0}^{\infty}\frac{f^{(r)}(a)}{r!}(x-a)^{r},\quad\mbox{for all \ensuremath{|x-a|<R},}\end{equation}
where $R$ is called the \emph{radius of convergence} of the series
-(see Appendix BLANK). Now we may combine this information to say that
-if an MGF exists for all $t$ in the interval $(-\epsilon,\epsilon)$,
-then we may write\begin{equation}
+(see Appendix BLANK). We combine the two to say that if an MGF exists
+for all $t$ in the interval $(-\epsilon,\epsilon)$, then we can
+write\begin{equation}
M_{X}(t)=\sum_{r=0}^{\infty}\frac{\E X^{r}}{r!}t^{r},\quad\mbox{for all \ensuremath{|t|<\epsilon}.}\end{equation}
\end{rem}
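For a tiny concrete instance of the remark (our own choice of model): if $X$ is Bernoulli, that is, $\mathsf{binom}(\mathtt{size}=1,\,\mathtt{prob}=p)$, then $M_{X}(t)=q+p\me^{t}$, and expanding $\me^{t}$ in its own series gives
\begin{equation}
M_{X}(t)=q+p\sum_{r=0}^{\infty}\frac{t^{r}}{r!}=1+\sum_{r=1}^{\infty}\frac{p}{r!}\, t^{r},
\end{equation}
so matching coefficients with $\sum_{r}(\E X^{r}/r!)\, t^{r}$ shows that $\E X^{r}=p$ for every $r\geq1$, which checks out since $X^{r}=X$ for a random variable taking only the values 0 and 1.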
@@ -6573,9 +6571,9 @@
Do an experiment $n$ times and observe $n$ values $x_{1}$, $x_{2}$,
\ldots{}, $x_{n}$ of a random variable $X$. For simplicity in most
-of the discussion that follows it will be convenient to suppose that
-the observed values are distinct, but comparable remarks remain valid
-even when the observed values are repeated.
+of the discussion that follows it will be convenient to imagine that
+the observed values are distinct, but the remarks are valid even when
+the observed values are repeated.
\begin{defn}
The \emph{empirical cumulative distribution function} $F_{n}$ (written
ECDF) is the probability distribution that places probability mass
@@ -6593,8 +6591,8 @@
The variance of the empirical distribution is\begin{equation}
\sigma^{2}=\sum_{x\in S}(x-\mu)^{2}f_{X}(x)=\sum_{i=1}^{n}(x_{i}-\xbar)^{2}\cdot\frac{1}{n}\end{equation}
and this last quantity looks very close to what we already know to
-be the sample variance.\[
-s^{2}=\frac{1}{n-1}\sum_{i=1}^{n}(x_{i}-\xbar)^{2}.\]
+be the sample variance.\begin{equation}
+s^{2}=\frac{1}{n-1}\sum_{i=1}^{n}(x_{i}-\xbar)^{2}.\end{equation}
The \emph{empirical quantile function} is the inverse of the ECDF.
See Section BLANK.
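A short numerical illustration of the distinction between the two variances, together with the empirical quantile function just mentioned (the data vector below is made up):

<<>>=
x <- c(2, 5, 7, 7, 10)
n <- length(x)
sum((x - mean(x))^2)/n                # variance of the empirical distribution (divide by n)
var(x)                                # the sample variance s^2 (divide by n - 1)
quantile(x, probs = 0.5, type = 1)    # empirical quantile (inverse ECDF) at p = 0.5
@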
@@ -6603,7 +6601,7 @@
The empirical distribution is not directly available as a distribution
in the same way that the other base probability distributions are,
-but there are plenty of resources available.
+but there are plenty of resources available for the determined investigator.
Given a data vector of observed values \inputencoding{latin9}\lstinline[showstringspaces=false]!x!\inputencoding{utf8},
we can see the empirical CDF with the \inputencoding{latin9}\lstinline[showstringspaces=false]!ecdf!\inputencoding{utf8}
@@ -6614,12 +6612,13 @@
ecdf(x)
@
-The above shows that the returned value of \inputencoding{latin9}\lstinline[showstringspaces=false]!ecdf(x)!\inputencoding{utf8}is
-not a number but rather a \emph{function}. It is not usually used
-in this form, by itself. More commonly it is used as an intermediate
-step in a more complicated calculation, for instance, in hypothesis
-testing (see Section BLANK) or resampling (see Chapter BLANK). It
-is nevertheless instructive to see what the \inputencoding{latin9}\lstinline[showstringspaces=false]!ecdf!\inputencoding{utf8}
+The above shows that the returned value of \inputencoding{latin9}\lstinline[showstringspaces=false]!ecdf(x)!\inputencoding{utf8}
+is not a \emph{number} but rather a \emph{function}. The ECDF is not
+usually used by itself in this form. More commonly it is
+used as an intermediate step in a more complicated calculation, for
+instance, in hypothesis testing (see Section BLANK) or resampling
+(see Chapter BLANK). It is nevertheless instructive to see what the
+\inputencoding{latin9}\lstinline[showstringspaces=false]!ecdf!\inputencoding{utf8}
looks like, and there is a special plot method for \inputencoding{latin9}\lstinline[showstringspaces=false]!ecdf!\inputencoding{utf8}
objects.
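For instance (with a made-up data vector), we can store the returned function, evaluate it like any ordinary function, and plot it:

<<>>=
x <- c(4, 7, 9, 11, 12)
Fn <- ecdf(x)        # Fn is itself a function
Fn(9)                # the proportion of observations <= 9, here 3/5
plot(Fn)             # the special plot method for ecdf objects
@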
@@ -6657,18 +6656,19 @@
To simulate from the empirical distribution supported on the vector
\inputencoding{latin9}\lstinline[showstringspaces=false]!x!\inputencoding{utf8},
-we can simply use the \inputencoding{latin9}\lstinline[showstringspaces=false]!sample!\inputencoding{utf8}
+we use the \inputencoding{latin9}\lstinline[showstringspaces=false]!sample!\inputencoding{utf8}
function.
<<>>=
x <- c(0,0,1)
-sample(x, size = 7, replace = TRUE) # should be 2/3
+sample(x, size = 7, replace = TRUE)
@
-We can get the empirical quantile function in \textsf{R} with \inputencoding{latin9}\lstinline[breaklines=true,showstringspaces=false]!quantile(x, probs = p, type = 1)!\inputencoding{utf8}.
+We can get the empirical quantile function in \textsf{R} with \inputencoding{latin9}\lstinline[breaklines=true,showstringspaces=false]!quantile(x, probs = p, type = 1)!\inputencoding{utf8};
+see Section BLANK.
-As we hinted above, the real significance of the empirical distribution
-is associated with its appearance in more sophisticated applications.
+As we hinted above, the empirical distribution is significant more
+because of how and where it appears in more sophisticated applications.
We will explore some of these in later chapters -- see, for instance,
Chapter BLANK.
@@ -6676,10 +6676,11 @@
\section{Other Discrete Distributions\label{sec:Other-Discrete-Distributions}}
The binomial and discrete uniform distributions are popular, and rightly
-so; they are simple and form the foundation for many other distributions.
-But the uniform and binomial only apply to a limited range of problems.
-In this section we introduce situations for which we need more than
-what the uniform and binomial offer.
+so; they are simple and form the foundation for many other more complicated
+distributions. But the particular uniform and binomial models only
+apply to a limited range of problems. In this section we introduce
+situations for which we need more than what the uniform and binomial
+offer.
\subsection{Dependent Bernoulli Trials\label{sec:Non-Bernoulli-Trials}}
@@ -7344,13 +7345,9 @@
\item We count the number of moth eggs on our window screen.
\item We count the number of blades of grass in a one square foot patch
of land.
-\item We count the number of pats on a baby's back until (she) burps.
+\item We count the number of pats on a baby's back until (s)he burps.
\end{enumerate}
-
-<<echo = FALSE, results = hide>>=
-rnorm(1)
-@
\begin{xca}
Find the constant $c$ so that the given function is a valid PDF of
a random variable $X$.\end{xca}
@@ -7361,8 +7358,9 @@
\item $f(x)=Cx^{3}(1-x)^{2},\quad0<x<1.$
\item ${\displaystyle f(x)=C(1+x^{2}/4)^{-1}},\quad-\infty<x<\infty.$\end{enumerate}
\begin{xca}
-Show that $\E(X-\mu)^{2}=\E X^{2}-\mu^{2}$. Hint: expand the quantity
-$(X-\mu)^{2}$ and distribute the expectation on the resulting terms.
+Show that $\E(X-\mu)^{2}=\E X^{2}-\mu^{2}$. \emph{Hint}: expand the
+quantity $(X-\mu)^{2}$ and distribute the expectation over the resulting
+terms.
\end{xca}
@@ -7384,8 +7382,17 @@
\paragraph*{What do I want them to know?}
+\begin{itemize}
+\item how to choose a reasonable continuous model under a variety of physical
+circumstances
+\item basic correspondence between continuous and discrete random variables
+\item the general tools of the trade for manipulation of continuous random
+variables, integration, \emph{etc}.
+\item some details on a couple of continuous models, and exposure to a bunch
+of other ones
+\item how to make new continuous random variables from old ones
+\end{itemize}
-
\section{Continuous Random Variables\label{sec:Continuous-Random-Variables}}
@@ -8441,13 +8448,13 @@
\paragraph*{What do I want them to know?}
\begin{itemize}
-\item joint distributions and marginal distributions (discrete and continuous)
-\item joint expectation and marginal expectation
-\item covariance and correlation
-\item conditional distributions and conditional expectation
-\item independence and exchangeability
-\item popular discrete joint distribution (multinomial)
-\item popular continuous distribution (multivariate normal)
+\item the basic notion of dependence and how it is manifested with multiple
+variables (two, in particular)
+\item joint versus marginal distributions/expectation (discrete and continuous)
+\item some numeric measures of dependence
+\item conditional distributions, in the context of independence and exchangeability
+\item some details of at least one multivariate model (discrete and continuous)
+\item what it looks like when there are more than two random variables present
\end{itemize}
\section{Joint and Marginal Probability Distributions\label{sec:Joint-Probability-Distributions}}
@@ -8806,11 +8813,10 @@
\end{example}
-We will do a continuous case example so that you can see how it works.
+We will do a continuous example so that you can see how it works.
\begin{example}
-Let us find the covariance of $(X,Y)$ in Example BLANK.
-
-The expected value of $X$ is\[
+Let us find the covariance of the variables $(X,Y)$ from Example
+BLANK. The expected value of $X$ is\[
\E X=\int_{0}^{1}x\cdot\frac{6}{5}\left(x+\frac{1}{3}\right)\diff x=\left.\frac{2}{5}x^{3}+\frac{1}{5}x^{2}\right|_{x=0}^{1}=\frac{3}{5},\]
and the expected value of $Y$ is\[
\E Y=\int_{0}^{1}y\cdot\frac{6}{5}\left(\frac{1}{2}+y^{2}\right)\diff y=\left.\frac{3}{10}y^{2}+\frac{3}{20}y^{4}\right|_{y=0}^{1}=\frac{9}{20}.\]
@@ -8851,9 +8857,8 @@
\section{Conditional Distributions\label{sec:Conditional-Distributions}}
-If $x\in S_{X}$ is such that $f_{X}(x)>0$, then we may define the
-\emph{conditional density of} $Y|\, X=x$, denoted $f_{Y|x}$, by
-\begin{equation}
+If $x\in S_{X}$ is such that $f_{X}(x)>0$, then we define the \emph{conditional
+density of} $Y|\, X=x$, denoted $f_{Y|x}$, by \begin{equation}
f_{Y|x}(y|x)=\frac{f_{X,Y}(x,y)}{f_{X}(x)},\quad y\in S_{Y}.\end{equation}
We define $f_{X|y}$ in a similar fashion.
\begin{example}
@@ -8971,8 +8976,8 @@
\begin{example}
In Example BLANK we considered the same experiment but different random
-variables: $U$ and $V$. We can see that $U$ and $V$ are not independent
-by finding a single pair $(u,v)$ where the independence equality
+variables $U$ and $V$. We can prove that $U$ and $V$ are not independent
+if we can find a single pair $(u,v)$ where the independence equality
does not hold. There are many such pairs. One of them is $(6,12)$:\[
f_{U,V}(6,12)=\frac{1}{36}\neq\left(\frac{11}{36}\right)\left(\frac{1}{36}\right)=f_{U}(6)\, f_{V}(12).\]
@@ -9007,9 +9012,8 @@
\begin{rem}
Unfortunately, the converse of Corollary BLANK is not true. That is,
-there are many random variables which are dependent even though their
-covariance and correlation is zero. For more details, see Casella
-BLANK. \end{rem}
+there are many random variables which are dependent yet whose covariance
+and correlation are zero. For more details, see Casella BLANK. \end{rem}
\begin{cor}
If $X$ and $Y$ are independent, then the moment generating function
of $X+Y$ is \begin{equation}
@@ -9072,14 +9076,16 @@
and $Y$ are exchangeable if $f(x,y)=f(y,x)$ for all $(x,y)$.
Exchangeable random variables exhibit symmetry in the sense that a
-person may exchange one for the other, with no substantive changes
-to their random behavior. While independence speaks to a \emph{lack
-of influence} between the two variables, exchangeability seeks to
-capture the \emph{symmetry} between them, in the sense that one variable
-may be exchanged for the other without any substantive change to the
-joint distribution.
+person may exchange one variable for the other with no substantive
+changes to their joint random behavior. While independence speaks
+to a \emph{lack of influence} between the two variables, exchangeability
+aims to capture the \emph{symmetry} between them.
\begin{example}
-Here is another one, somewhat more complicated that the one above.\begin{multline}
+BLANK.
+\end{example}
+
+\begin{example}
+Here is another one, somewhat more complicated than the one above.\begin{multline}
f_{X,Y}(x,y)=(1+\alpha)\lambda^{2}\me^{-\lambda(x+y)}+\alpha(2\lambda)^{2}\me^{-2\lambda(x+y)}-2\alpha\lambda^{2}\left(\me^{-\lambda(2x+y)}+\me^{-\lambda(x+2y)}\right).\end{multline}
It is straightforward and tedious to check that $\iint f=1$. We may
see immediately that $f_{X,Y}(x,y)=f_{X,Y}(y,x)$ for all $(x,y)$,
@@ -9088,10 +9094,10 @@
one from the Farlie-Gumbel-Morgenstern family of distributions; see
BLANK.
-It is a misconception that exchangeability is a weaker condition than
-independence. In fact, the two notions are incommensurable. But one
-direct connection between the two is made clear by DeFinetti's Thereom.
-See Section BLANK for details.
+There seems to be a common misconception that exchangeability is somehow
+a weaker condition than independence, but in fact, the two notions
+are incommensurable. One direct connection between the two is made
+clear by DeFinetti's Theorem. See Section BLANK for details.
\end{example}
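As a rough numerical sanity check on the claim that $\iint f=1$ (our own sketch, assuming the support is $x,y>0$ as the exponential form suggests, and with the illustrative values $\lambda=1$ and $\alpha=1/2$):

<<>>=
f <- function(x, y, lambda = 1, alpha = 1/2) {
  (1 + alpha) * lambda^2 * exp(-lambda * (x + y)) +
    alpha * (2 * lambda)^2 * exp(-2 * lambda * (x + y)) -
    2 * alpha * lambda^2 * (exp(-lambda * (2*x + y)) + exp(-lambda * (x + 2*y)))
}
all.equal(f(1, 2), f(2, 1))      # f(x,y) = f(y,x), so X and Y are exchangeable
inner <- function(y) sapply(y, function(t) integrate(f, 0, Inf, y = t)$value)
integrate(inner, 0, Inf)$value   # iterated integral of f, approximately 1
@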
\section{The Bivariate Normal Distribution\label{sec:The-Bivariate-Normal}}
@@ -9537,26 +9543,17 @@
\paragraph*{What do I want them to know?}
\begin{itemize}
-\item Sampling Distributions of one-sample statistics,
-\item sampling distributions of two sample statistics.
-\item simulated sampling distributions
-\item What do I want them to know?
-\item what a srs(n) is
-\item the sampling distributions of popular statistics
-
-\begin{itemize}
-\item of xbar, s\textasciicircum{}2, and phat
+\item the notion of population versus simple random sample, parameter versus
+statistic, and population distribution versus sampling distribution
+\item the classical sampling distributions of the standard one and two sample
+statistics
+\item how to generate a simulated sampling distribution when the statistic
+is crazy
+\item the Central Limit Theorem, period.
+\item some basic concepts related to sampling distribution utility, such
+as bias and variance
\end{itemize}
-\item the sampling distributions of more complicated statistics (and how
-to generate them)
-\begin{itemize}
-\item the IQR, median, and mad
-\end{itemize}
-\item prove the CLT
-\item maybe mention the concepts of bias and variance of sampling distributions.
-\end{itemize}
-
\section{Simple Random Samples\label{sec:Simple-Random-Samples}}
@@ -10133,7 +10130,7 @@
\item use calculus to find an MLE for one-parameter families
\end{itemize}
\item about properties of the estimators they find, such as bias, minimum
-variance, MSE?, asymptotics?
+variance, MSE?
\item point versus interval estimation, and how to find and interpret confidence
intervals for basic experimental designs
\item the concept of margin of error and its relationship to sample size
@@ -11009,8 +11006,7 @@
with means, variances, and proportions
\item the notion of between versus within group variation and how it plays
out with one-way ANOVA
-\item the concept of statistical power and its relationship with sample
-size
+\item the concept of statistical power and its relation to sample size
\end{itemize}
\section{Introduction}
@@ -11545,10 +11541,12 @@
\paragraph*{What do I want them to know?}
\begin{itemize}
\item basic philosophy of SLR and the regression assumptions
-\item point and interval estimation of the parameters of the linear model
+\item point and interval estimation of the model parameters, and how to
+use them to make predictions
\item point and interval estimation of future observations from the model
-\item regression diagnostics including $R^{2}$ and residual analysis
-\item the concepts of influential versus outlying and how to tell the difference
+\item regression diagnostics, including $R^{2}$ and basic residual analysis
+\item the concept of influential versus outlying observations, and how to
+tell the difference
\end{itemize}
\section{Basic Philosophy\label{sec:Basic-Philosophy}}
@@ -14571,7 +14569,7 @@
\paragraph*{What do I want them to know?}
\begin{itemize}
-\item basic philosophy of resampling and why it is desired
+\item basic philosophy of resampling and why it is important
\item resampling for standard errors and confidence intervals
\item resampling for hypothesis tests (permutation tests)
\end{itemize}
Modified: www/book/index.php
===================================================================
--- www/book/index.php 2009-12-26 00:35:37 UTC (rev 102)
+++ www/book/index.php 2009-12-26 20:58:43 UTC (rev 103)
@@ -36,7 +36,7 @@
<p class="articleTitle">What <span class="name">IPSUR</span> is:</p>
<blockquote>
-<p> <span class="name">IPSUR</span> stands for <em>Introduction to Probability and Statistics Using R</em>, ISBN: 978-0-557-24979-4. It is a textbook written for an undergraduate course in probability and statistics. The approximate prerequisites are a couple semesters of calculus and some linear algebra in a few places. Typical students in my course include mathematics, engineering, and computer science majors.
+<p> <span class="name">IPSUR</span> stands for <em>Introduction to Probability and Statistics Using R</em> (ISBN: 978-0-557-24979-4), a textbook written for an undergraduate course in probability and statistics. The approximate prerequisites are two or three semesters of calculus and some linear algebra in a few places. Attendees of the class include mathematics, engineering, and computer science majors.
</p>
<p>