Daddy's Technology Notes

Read, think, and write down the notes.

Saturday, July 16, 2005

Linear discriminant analysis

Linear discriminant analysis (LDA) is sometimes known as Fisher's linear discriminant, after its inventor, Ronald A. Fisher, who published it in "The Use of Multiple Measurements in Taxonomic Problems" (1936). It is typically used as a feature extraction step before classification.

LDA is used for two-class classification: given a vector of observations \vec x, predict the probability of a binary class variable c. LDA is based on the following observation: if the class-conditional densities p(\vec x|c=1) and p(\vec x|c=0) are both normal with identical full-rank covariance \Sigma but possibly different means \vec \mu_0 and \vec \mu_1, then a sufficient statistic for P(c|\vec x) is given by \vec x \cdot \vec w, where

\vec w = \Sigma^{-1} (\vec \mu_1 - \vec \mu_0)

That is, the probability of an input \vec x belonging to class c is purely a function of this dot product.
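
As a concrete illustration, the sketch below computes this statistic with NumPy. The means, covariance, and test point are made-up values for illustration, not numbers from the post:

import numpy as np

# Minimal sketch of the two-class LDA statistic, assuming the parameters
# are known. All numbers here are illustrative, not from the post.
mu0 = np.array([0.0, 0.0])        # mean of class c = 0
mu1 = np.array([2.0, 1.0])        # mean of class c = 1
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])    # shared full-rank covariance

# w = Sigma^{-1} (mu1 - mu0), solved rather than explicitly inverted
w = np.linalg.solve(Sigma, mu1 - mu0)

x = np.array([1.5, 0.2])          # a new observation
score = x @ w                     # the sufficient statistic x . w
print(score)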

A nice property of this dot product is that, out of all possible one-dimensional projections, this one maximizes the ratio of the squared distance between the projected means to the variance of the projected normal distributions. Thus, in some sense, this projection maximizes the signal-to-noise ratio.
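
This maximization claim is easy to check numerically: for the same illustrative parameters as above, the Fisher ratio attained by \vec w beats that of randomly drawn directions. A rough sketch:

import numpy as np

# Rough numerical check: among one-dimensional projections, w maximizes the
# ratio of squared distance between projected means to projected variance
# (the Fisher criterion). Parameters reuse the illustrative values above.
rng = np.random.default_rng(0)
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
w = np.linalg.solve(Sigma, mu1 - mu0)

def fisher_ratio(v):
    return (v @ (mu1 - mu0)) ** 2 / (v @ Sigma @ v)

best_random = max(fisher_ratio(rng.standard_normal(2)) for _ in range(10_000))
print(fisher_ratio(w), ">=", best_random)   # w attains the maximum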

In practice, this technique is applied by assuming that the two densities p(\vec x|c=1) and p(\vec x|c=0) have different means and a shared covariance, and then plugging in the maximum likelihood estimates (or the maximum a posteriori estimates) of the means and covariance.
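
A minimal sketch of that plug-in procedure, assuming hypothetical labelled training arrays X (samples by features) and c (0/1 labels):

import numpy as np

# Estimate the class means and the shared covariance by maximum likelihood,
# then form w and a simple midpoint threshold (equal priors assumed).
def fit_lda(X, c):
    X0, X1 = X[c == 0], X[c == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Z = np.vstack([X0 - mu0, X1 - mu1])
    Sigma = (Z.T @ Z) / len(X)            # pooled MLE covariance
    w = np.linalg.solve(Sigma, mu1 - mu0)
    b = -0.5 * (mu0 + mu1) @ w            # midpoint of the projected means
    return w, b

# Usage: predict class 1 whenever x . w + b > 0, e.g.
# w, b = fit_lda(X, c); preds = (X_new @ w + b > 0).astype(int)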

LDA can be generalized to multiple discriminant analysis, where c becomes a categorical variable with N possible states instead of only two. Analogously, if the class-conditional densities p(\vec x|c=i) are normal with a shared covariance, then the sufficient statistic for P(c|\vec x) consists of the values of N projections of \vec x onto the subspace spanned by the N means, affine-transformed by the inverse covariance matrix. These projections can be found by solving a generalized eigenvalue problem, where the numerator is the covariance matrix formed by treating the means as the samples, and the denominator is the shared covariance matrix.
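
A sketch of that construction, again assuming hypothetical labelled arrays X and c with N classes: build the "numerator" covariance from the class means, the "denominator" from the pooled within-class covariance, and solve the generalized eigenvalue problem with SciPy.

import numpy as np
from scipy.linalg import eigh

# Multi-class LDA sketch: solve S_b v = lambda S_w v, where S_b is the
# covariance of the class means and S_w the shared within-class covariance.
def multiclass_lda(X, c, n_components):
    classes = np.unique(c)
    # "Numerator": covariance formed by treating the class means as samples.
    means = np.array([X[c == k].mean(axis=0) for k in classes])
    S_b = np.cov(means, rowvar=False)
    # "Denominator": shared within-class covariance (pooled MLE).
    Z = np.vstack([X[c == k] - X[c == k].mean(axis=0) for k in classes])
    S_w = (Z.T @ Z) / len(X)
    # Generalized eigenvalue problem; the leading eigenvectors span the
    # discriminant subspace (at most N - 1 useful directions).
    vals, vecs = eigh(S_b, S_w)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_components]]

# Usage: W = multiclass_lda(X, c, n_components=2); X_proj = X @ W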
