Optimization in Spectral Unmixing
Introduction
Remote Sensing Analysis of Alteration Mineralogy Associated with Natural Acid Drainage
in the Grizzly Peak Caldera, Sawatch Range, Colorado
David W. Coulter
Ph.D. Thesis, Colorado School of Mines, 2006
[Figure: West Red (Ruby Mountain) from Enterprise Peak; West Red Fe endmembers from AVIRIS. Red: hematite. Green: goethite. Blue: jarosite. (low pH) (high pH)]
Cross-Correlation Spectral Matching
Linear Correlation Coefficient (Pearson's r) between the pixel spectrum x and a reference spectrum z over n bands:

r = \frac{n\sum_i x_i z_i - \sum_i x_i \sum_i z_i}{\sqrt{\left[n\sum_i x_i^2 - \left(\sum_i x_i\right)^2\right]\left[n\sum_i z_i^2 - \left(\sum_i z_i\right)^2\right]}}    (1)

Generalize: cross-correlation at match position m, with the reference shifted by m bands and the sums taken over the n overlapping bands:

r_m = \frac{n\sum_i x_i z_{i+m} - \sum_i x_i \sum_i z_{i+m}}{\sqrt{\left[n\sum_i x_i^2 - \left(\sum_i x_i\right)^2\right]\left[n\sum_i z_{i+m}^2 - \left(\sum_i z_{i+m}\right)^2\right]}}    (2)

Measures of fit:

t = r\sqrt{\frac{n-2}{1-r^2}}    (3)

is distributed in the null case as Student's t with n - 2 degrees of freedom, and the chi-square of the match is

\chi^2 = \sum_i \frac{(x_i - z_i)^2}{\sigma_i^2}.    (4)
For non-uniform weights, weight the sums in eq. (1) by w_i = 1/\sigma_i^2. Then the goodness-of-fit is

Q = Q\!\left(\frac{n-2}{2}, \frac{\chi^2}{2}\right),    (5)

where the incomplete gamma function is

Q(a,x) = \frac{\Gamma(a,x)}{\Gamma(a)} = \frac{1}{\Gamma(a)}\int_x^\infty e^{-t}\, t^{a-1}\, dt.

If, e.g., Q \ge 0.1, then the goodness-of-fit is believable. If Q is larger than, say, 0.001, the fit may be acceptable if the errors are non-normal or have been moderately underestimated. If Q < 0.001, then question the model and/or the estimation procedure, eq. (2). If the latter, consider robust estimation.
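For concreteness, the statistics above map onto a few lines of numpy/scipy. This is a minimal sketch of eqs. (1)-(5); the function names, the band-shifting convention, and the use of np.corrcoef are illustrative choices, not prescriptions from the CCSM literature.

    import numpy as np
    from scipy.special import gammaincc, stdtr   # regularized upper incomplete gamma; Student's t CDF

    def correlogram(x, z, max_shift=3):
        """Cross-correlogram r_m (eq. 2): Pearson r between pixel spectrum x and
        reference z shifted by m bands, for m = -max_shift .. +max_shift."""
        n = len(x)
        r_m = {}
        for m in range(-max_shift, max_shift + 1):
            lo, hi = max(0, m), min(n, n + m)                          # overlapping band range
            r_m[m] = np.corrcoef(x[lo:hi], z[lo - m:hi - m])[0, 1]     # eq. (1) on the overlap
        return r_m

    def t_statistic(r, n):
        """Student's t of eq. (3) and its two-sided significance under the null hypothesis."""
        t = r * np.sqrt((n - 2) / (1.0 - r ** 2))
        p = 2.0 * (1.0 - stdtr(n - 2, abs(t)))       # small p: r unlikely to arise by chance
        return t, p

    def goodness_of_fit(chi2, n):
        """Q = Q((n - 2)/2, chi^2/2) of eq. (5), via the regularized incomplete gamma function."""
        return gammaincc(0.5 * (n - 2), 0.5 * chi2)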
Other useful measures

Skewness of the correlogram about its peak, e.g. the asymmetry

\mathrm{skew}_m = \left| r_{+m} - r_{-m} \right|,    (8)

and the RMS deviation of the observed correlogram from the ideal auto-correlogram,

\mathrm{RMS} = \sqrt{\frac{1}{M}\sum_{m=1}^{M}\left[r_{zz}(m) - r_m\right]^2},    (9)

where r_{zz}(m) is the auto-correlogram of z with itself, M is the number of match positions, and m is the match number.

To compare the correlations r_1 and r_2 produced by two references, use Fisher's transform

z_F = \frac{1}{2}\ln\!\left(\frac{1+r}{1-r}\right),    (10)

for which z_{F,1} - z_{F,2} is approximately normal with variance 1/(n_1 - 3) + 1/(n_2 - 3). This can be useful when trying to assess whether a parameterized fit changes "significantly" when a given change of parameter produces the two references z_1 and z_2.
Optimized Cross-Correlation Spectral Matching

In "traditional" CCSM, the reference spectrum z to which the pixel spectrum x is compared is taken to be a single (pure) endmember. Next assume M endmembers e_k, k = 1, ..., M, of interest, and linear mixing. Seek weighting factors f_k such that the synthesized pixel intensity at band i,

z_i = \sum_{k=1}^{M} f_k\, e_{k,i},    (11)

maximizes the cross-correlogram of eq. (2) computed between x and the synthesized z,

r_m = r_m(x, z),    (12)

at match position m = 0, places that maximum at m = 0, and minimizes its skew, subject to \sum_k f_k = 1 and f_k \ge 0. Significance of the different values of r_0 resulting from different choices of endmember set may be assessed using eq. (10). Coulter [2] chooses the endmember set that maximizes r_0. As written, (12) at m = 0 is the cross-correlation of two unit vectors,

r_0 = \hat{x} \cdot \hat{z},    (13)

where

\hat{x} = \frac{x - \bar{x}\,\mathbf{1}}{\left|x - \bar{x}\,\mathbf{1}\right|}, \qquad \hat{z} = \frac{z - \bar{z}\,\mathbf{1}}{\left|z - \bar{z}\,\mathbf{1}\right|},    (14)

which is independent of the normalization of z. Likewise, if x is the spectrum at a particular image pixel, then the unit normalization renders the r_m somewhat independent of shadow and striping.
Equation (14) makes \hat{z} non-linear in the f_k; eq. (13) may be maximized w.r.t. the f_k by the non-linear constrained optimizer of one's choice. It does not give much insight into the relative band-to-band errors inherent in x. As written, it assumes they are all equal, and we can pretend

\sigma_i = \sigma, \qquad i = 1, \ldots, N.    (15)

Given the spectrometer calibration, we can do better than this.
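As one concrete reading of the "optimizer of one's choice" remark, here is a sketch that maximizes r_0 of eq. (13) over the f_k with scipy.optimize.minimize (SLSQP), under \sum_k f_k = 1 and f_k \ge 0. The helper names and the uniform starting point are illustrative choices, not from [2].

    import numpy as np
    from scipy.optimize import minimize

    def unit_centered(v):
        """Mean-subtracted unit vector, as in eq. (14)."""
        v = v - v.mean()
        return v / np.linalg.norm(v)

    def fit_mixture_ccsm(x, E):
        """Maximize r_0 = x_hat . z_hat over abundances f, with sum(f) = 1 and f >= 0.
        x: measured pixel spectrum (N,); E: endmember matrix (M, N), one endmember per row."""
        M = E.shape[0]
        x_hat = unit_centered(x)

        def neg_r0(f):
            z = f @ E                                  # synthesized mixture, eq. (11)
            return -np.dot(x_hat, unit_centered(z))

        res = minimize(neg_r0, np.full(M, 1.0 / M), method="SLSQP",
                       bounds=[(0.0, 1.0)] * M,
                       constraints=[{"type": "eq", "fun": lambda f: f.sum() - 1.0}])
        return res.x, -res.fun                         # abundances f_k and the achieved r_0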
Least-Squares Fitting

Since r_0 is expressed in terms of unit vectors, maximizing r_0 is equivalent to minimizing

\left|\hat{x} - \hat{z}\right|^2 = 2\,(1 - r_0).    (16)

Minimizing the squared difference of the unit vectors \hat{x} and \hat{z} in (16) is still a non-linear problem in the f_k because of how they appear in the denominator of the normalization (14). However, if we relax the restrictions that z be a unit vector (and that \sum_k f_k = 1), we can define the equivalent problem of minimizing

\chi^2 = \sum_{i=1}^{N}\left(x_i - \sum_{k=1}^{M} f_k\, e_{k,i}\right)^2,    (18)

which is linear in the f_k, but not yet quite as general as we want. At each band i there are (at least) three uncertainties in the sensor measurement: those in the received radiance x_i, those in the center wavelength \lambda_i, and those in the sensor's bandwidth, modeled as the FWHM of a Gaussian response. The FWHM is typically on the order of the band-to-band wavelength spacing. FWHM errors are usually of secondary significance and will be ignored.
If we allow for uncertain band centers and assume normal, independent distributions, we can write

P_i \propto \exp\!\left[-\frac{(x_i - y_i)^2}{2\sigma_{x,i}^2}\right]\exp\!\left[-\frac{(\lambda_i - \mu_i)^2}{2\sigma_{\lambda,i}^2}\right]    (19)

as the joint probability that the sensor records band radiance x_i at the recorded wavelength \lambda_i when it actually received the (unknown) radiance y_i at the actual (and unknown) wavelength \mu_i. The joint probability of the pixel's measured spectrum across N bands is

P \propto \prod_{i=1}^{N} P_i.    (20)

Define

\chi^2 \equiv -2\ln P + \mathrm{const} = \sum_{i=1}^{N}\left[\frac{(x_i - y_i)^2}{\sigma_{x,i}^2} + \frac{(\lambda_i - \mu_i)^2}{\sigma_{\lambda,i}^2}\right].

Associate the (unknown) actual spectrum y_i with the modeled mixture y_i = \sum_k f_k\, e_k(\mu_i). Then

\chi^2(f_1,\ldots,f_M;\,\mu_1,\ldots,\mu_N) = \sum_{i=1}^{N}\left[\frac{\bigl(x_i - \sum_k f_k\, e_k(\mu_i)\bigr)^2}{\sigma_{x,i}^2} + \frac{(\lambda_i - \mu_i)^2}{\sigma_{\lambda,i}^2}\right]    (23)

represents a constrained optimization problem wherein we wish to minimize \chi^2 as a function of the M + N variables f_k and \mu_i, subject to f_k \ge 0. The motivation for including the \mu_i is to (hopefully) obtain "more" non-negative f_k in the optimal solution.
Normal Equations

Setting \partial\chi^2/\partial f_j = 0 gives

\sum_{k=1}^{M} f_k \sum_{i=1}^{N} \frac{e_j(\mu_i)\, e_k(\mu_i)}{\sigma_{x,i}^2} = \sum_{i=1}^{N} \frac{x_i\, e_j(\mu_i)}{\sigma_{x,i}^2}, \qquad j = 1, \ldots, M,    (26)

and setting \partial\chi^2/\partial\mu_i = 0 gives

\frac{\lambda_i - \mu_i}{\sigma_{\lambda,i}^2} + \frac{1}{\sigma_{x,i}^2}\left(x_i - \sum_k f_k\, e_k(\mu_i)\right)\sum_k f_k\, e_k'(\mu_i) = 0, \qquad i = 1, \ldots, N.    (27)

The product of the two sums over k makes eq. (27) quadratic in the f_k, so the optimization including center-wavelength uncertainties is non-linear. The e_k(\mu_i) can be computed efficiently enough, but the endmember spectra will need to be reconvolved against the new center wavelengths for each new set of center wavelengths required during the optimization. The problem is obviously much simpler if we assume the provided center wavelength values are "good enough", set \mu_i = \lambda_i, and simply ignore eqs. (27). Equations (26) then become

\sum_{k=1}^{M} f_k \sum_{i=1}^{N} \frac{e_{j,i}\, e_{k,i}}{\sigma_i^2} = \sum_{i=1}^{N} \frac{x_i\, e_{j,i}}{\sigma_i^2}, \qquad j = 1, \ldots, M,    (31)

which are the usual linear least-squares normal equations for the f_k. For what it's worth, for the unconstrained problem the minimal solution vector is unique provided the matrix \bigl[\sum_i e_{j,i} e_{k,i}/\sigma_i^2\bigr] is non-singular.
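In code, the constrained version of eqs. (18)/(31) is available directly as non-negative least squares. A sketch, assuming the per-band errors \sigma_i are known (the weighting is applied by scaling the rows of the design matrix); scipy.optimize.nnls enforces f_k \ge 0, while numpy.linalg.lstsq gives the unconstrained solution of eq. (31) for comparison.

    import numpy as np
    from scipy.optimize import nnls

    def unmix_least_squares(x, E, sigma=None):
        """Minimize sum_i (x_i - sum_k f_k e_{k,i})^2 / sigma_i^2 with f_k >= 0.
        x: pixel spectrum (N,); E: endmembers (M, N); sigma: per-band errors (N,) or None."""
        A = E.T                                    # design matrix, shape (N, M)
        b = x.astype(float)
        if sigma is not None:                      # weight each band by 1/sigma_i
            A = A / sigma[:, None]
            b = b / sigma
        f_nn, _ = nnls(A, b)                       # non-negative least squares
        f_free, *_ = np.linalg.lstsq(A, b, rcond=None)   # unconstrained solution of eq. (31)
        return f_nn, f_free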
Image Mean Vector and Covariance Matrix [11]

If an image has N bands and P pixels, the mean vector is

\mathbf{m} = \frac{1}{P}\sum_{j=1}^{P} \mathbf{x}_j,    (32)

where \mathbf{x}_j is the jth pixel vector of the image,

\mathbf{x}_j = (x_{1j}, x_{2j}, \ldots, x_{Nj})^T.    (33)

The image covariance matrix is

C = \frac{1}{P-1}\,(X - M)(X - M)^T,    (34)

where X is the N \times P matrix of pixel vectors, each of length N,

X = [\mathbf{x}_1\ \mathbf{x}_2\ \cdots\ \mathbf{x}_P],    (35)

and M is the matrix of P identical mean vectors (N rows by P columns):

M = \mathbf{m}\,\mathbf{1}_{1\times P},    (36)

where \mathbf{1}_{1\times P} is a 1 \times P matrix of ones.
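In numpy the same quantities follow in a couple of lines. A sketch, assuming the image cube is stored as a (bands, rows, cols) array; the (P - 1) normalization matches np.cov.

    import numpy as np

    def image_mean_and_covariance(cube):
        """cube: image array of shape (N bands, rows, cols).
        Returns the mean vector (eq. 32) and the N x N covariance matrix (eq. 34)."""
        N = cube.shape[0]
        X = cube.reshape(N, -1)                    # N x P matrix of pixel vectors, eq. (35)
        m = X.mean(axis=1)                         # mean vector, eq. (32)
        D = X - m[:, None]                         # subtract the matrix of identical mean vectors, eq. (36)
        C = D @ D.T / (X.shape[1] - 1)             # covariance matrix, eq. (34); same as np.cov(X)
        return m, C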
Principal Component Transformation [7] (Karhunen-Loève Transformation [12]; GRASS imagery i.pca)

Let

C\,\mathbf{e}_k = \lambda_k\,\mathbf{e}_k, \qquad k = 1, \ldots, N,    (37)

\mathbf{y}_k = \mathbf{e}_k^T\,(\mathbf{x} - \mathbf{m}),    (38)

where C is the symmetric positive-definite image covariance matrix and the \mathbf{e}_k are its orthonormal eigenvectors with eigenvalues \lambda_k. The magnitudes of the \lambda_k impose an ordering on the transformed component vectors \mathbf{y}_k. Those with the largest \lambda_k, say \lambda_k > \lambda_{\min}, are the Principal Components; \lambda_{\min} should be related to the noise floor.
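A sketch of eqs. (37)-(38) on the same cube layout as above; numpy.linalg.eigh returns eigenvalues in ascending order, so they are reversed to put the largest-variance components first, and the n_keep cutoff stands in for the noise-floor threshold \lambda_{\min}.

    import numpy as np

    def principal_components(cube, n_keep=None):
        """Project an (N, rows, cols) image onto the eigenvectors of its covariance matrix."""
        N, rows, cols = cube.shape
        X = cube.reshape(N, -1)
        m = X.mean(axis=1)
        evals, evecs = np.linalg.eigh(np.cov(X))    # eq. (37), ascending eigenvalues
        order = np.argsort(evals)[::-1]             # largest lambda_k first
        evals, evecs = evals[order], evecs[:, order]
        Y = evecs.T @ (X - m[:, None])              # component images y_k = e_k^T (x - m), eq. (38)
        if n_keep is not None:                      # keep only components above the noise floor
            evals, Y = evals[:n_keep], Y[:n_keep]
        return evals, Y.reshape(-1, rows, cols)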
Minimum Noise Fraction [10, p. 38] [3]

We wish to find a particular coefficient matrix A that in some sense maximizes the image S/N, assuming the image pixel vectors are the sum of uncorrelated signal and noise,

\mathbf{x} = \mathbf{s} + \mathbf{n}, \qquad C = C_S + C_N.

Maximize the noise fraction

\lambda = \frac{\mathbf{a}^T C_N\, \mathbf{a}}{\mathbf{a}^T C\, \mathbf{a}},    (50)

whose stationary values \lambda_k are the generalized eigenvalues of C_N with respect to C, i.e. C_N \mathbf{a}_k = \lambda_k C \mathbf{a}_k, and the \mathbf{a}_k are the corresponding generalized eigenvectors. Compare with PCA:

C\,\mathbf{e}_k = \lambda_k\,\mathbf{e}_k.    (55)
Noise Covariance

Green [3] suggests the noise be taken to have unit variance and to be band-to-band uncorrelated,

C_N = I,    (56)

i.e. the noise is completely uncorrelated. In this ideal case all noise band variances are equal and eq. (50) reduces to an ordinary eigenproblem of C: the \mathbf{a}_k are the PCA eigenvectors, and since the eigenvectors are orthonormal the noise fractions are \lambda_k = \mathbf{a}_k^T\mathbf{a}_k / (\mathbf{a}_k^T C \mathbf{a}_k) = 1/c_k if the variance of component k is c_k.
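Eq. (50) is a symmetric-definite generalized eigenproblem, which scipy.linalg.eigh solves directly. A sketch, assuming a noise covariance estimate C_noise obtained by one of the methods discussed next; eigh returns the noise fractions in ascending order, so the highest-S/N component comes first.

    import numpy as np
    from scipy.linalg import eigh

    def mnf_transform(X, C_noise):
        """X: N x P matrix of pixel vectors; C_noise: N x N noise covariance estimate.
        Solves C_N a = lambda C a (eq. 50); small lambda means a high-S/N component."""
        Xc = X - X.mean(axis=1, keepdims=True)
        C = Xc @ Xc.T / (X.shape[1] - 1)            # image covariance matrix
        lam, A = eigh(C_noise, C)                   # generalized eigenvalues = noise fractions
        Y = A.T @ Xc                                # MNF components, highest S/N first
        return lam, Y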
Homogeneous Area Method [9, sec. 2.9.1]

If possible, find a homogeneous area of pixels in the image; the covariance matrix of the pixels within that area serves as the estimate of the noise covariance C_N.
Local Means and Local Variances [9, sec. 2.9.2]

Divide the image into small blocks and, for each block, compute the local mean vector and the local covariance matrix C_local. A suitable average of the local covariances over the most homogeneous blocks,

C_N \approx \langle C_{\mathrm{local}} \rangle,    (68)

is the desired noise covariance matrix.
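A sketch of both noise-covariance estimators; the homogeneous-area version simply takes the covariance of pixels inside a user-supplied mask, and the local version averages the covariance of small tiles, keeping the quietest ones. The 4 x 4 tile size, the trace-based homogeneity score, and the 25% keep fraction are arbitrary illustrative choices, not values from [9].

    import numpy as np

    def noise_cov_homogeneous(cube, mask):
        """Covariance of the pixels inside a homogeneous area (boolean mask of shape rows x cols)."""
        return np.cov(cube[:, mask])                # cube: (N, rows, cols)

    def noise_cov_local_blocks(cube, block=4, keep_fraction=0.25):
        """Average the local covariance of the quietest block x block tiles."""
        N, rows, cols = cube.shape
        covs, scores = [], []
        for r in range(0, rows - block + 1, block):
            for c in range(0, cols - block + 1, block):
                tile = cube[:, r:r + block, c:c + block].reshape(N, -1)
                C_local = np.cov(tile)              # local covariance matrix
                covs.append(C_local)
                scores.append(np.trace(C_local))    # total local variance as a homogeneity score
        keep = max(1, int(keep_fraction * len(covs)))
        quietest = np.argsort(scores)[:keep]
        return np.mean([covs[i] for i in quietest], axis=0)   # eq. (68)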
Other methods: "unsupervised training" derived-endmember classification schemes, e.g. LAS' search [13] and GRASS' cluster/maxlik, are based upon local covariance minimization.
Vertex Component Analysis is an "unsupervised training" derived-endmember classification scheme. We use the notation of Nascimento and Dias [5], which is slightly different from that used in previous sections.

Assume linear mixing, and let M = [\mathbf{m}_1, \ldots, \mathbf{m}_p] be the mixing matrix whose columns are the p endmember signatures, \boldsymbol{\alpha}_j the abundance vector at pixel j, \gamma_j a scale factor modeling illumination variability, and \mathbf{n}_j additive noise. Then the recorded spectral vector at pixel j may be given by

\mathbf{r}_j = \gamma_j\, M\, \boldsymbol{\alpha}_j + \mathbf{n}_j.

Our goal is to find the abundance vectors \boldsymbol{\alpha}_j corresponding to some endmember set \{\mathbf{m}_1, \ldots, \mathbf{m}_p\}. An appropriate endmember set is to be determined as part of the VCA algorithm. Endmember identification, the matching of one or more of the VCA-generated endmembers to actual specimen spectral samples such as those from the USGS spectral library or field ground-truth sampling, may be done in a subsequent processing step.

Since the set \{\boldsymbol{\alpha} : \boldsymbol{\alpha} \ge 0,\ \mathbf{1}^T\boldsymbol{\alpha} = 1\} is a simplex, the set S_x = \{\mathbf{x} = M\boldsymbol{\alpha}\} is also a simplex. However, even assuming zero noise, the observed vector set belongs to a convex cone C_p = \{\mathbf{r} = \gamma M\boldsymbol{\alpha},\ \gamma \ge 0\} owing to the different scale factors \gamma_j.

But the projective projection of the convex cone onto a properly chosen hyperplane is a simplex with vertices corresponding to the vertices of the simplex S_x. The simplex S_p is the projective projection of the convex cone C_p onto the hyperplane \mathbf{r}^T\mathbf{u} = 1, where the choice of \mathbf{u} assures there are no observed vectors orthogonal to the hyperplane: \mathbf{r}_j^T\mathbf{u} \ne 0 for all j.
VCA algorithm accuracy is dependent on the image SNR; there are several ways to estimate it.
Inputs: the matrix R = [\mathbf{r}_1, \ldots, \mathbf{r}_N] of observed spectral vectors and the number p of endmembers to extract.
Outputs: \hat{M}, the estimated mixing matrix (one column per endmember).
Notations: [X]_{:,j} denotes the jth column of X; A^\# denotes the Moore-Penrose pseudoinverse of A.
See http://en.wikipedia.org/wiki/Moore-Penrose_pseudoinverse. In particular: "the pseudoinverse for matrices related to A can be computed by applying the Sherman-Morrison-Woodbury formula to update the inverse of the correlation matrix, which may need less work. In particular, if the related matrix differs from the original one by only a changed, added or deleted row or column, incremental algorithms exist that exploit the relationship," a fact that might be quite useful here.
1: compute the image SNR and compare it with the threshold SNR_th
2: if SNR > SNR_th then
       project the data onto the first p singular vectors and projectively project each
       projected vector onto the hyperplane \mathbf{r}^T\mathbf{u} = 1
   else
       project the data onto the first p - 1 principal components and append a constant
       last coordinate to each projected vector
   end if
14: initialize A := [\mathbf{e}_u\ |\ 0\ |\ \cdots\ |\ 0], a p x p auxiliary matrix, and the subspace basis U_d by SVD
15: for i := 1 to p do
       draw a random direction, project it orthogonally to the subspace spanned by the
       current columns of A, project the data onto that direction, and store the extreme
       projection as the new vertex [A]_{:,i}, recording its pixel index indice(i)
22: end for
23: if SNR > SNR_th then
24:    \hat{M} := U_d [X]_{:,indice} is the estimated mixing matrix
25: else
26:    \hat{M} := U_d [X]_{:,indice} + \bar{\mathbf{r}} is the estimated mixing matrix
27: end if
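To make the listing concrete, here is a compressed Python sketch of the high-SNR branch only (SVD projection, projective projection onto \mathbf{r}^T\mathbf{u} = 1, and the vertex-finding loop). The SNR test, the low-SNR branch, and the incremental pseudoinverse update mentioned above are omitted, and the pseudoinverse is simply recomputed with numpy.linalg.pinv each iteration; names such as vca and seed are illustrative, not from [5].

    import numpy as np

    def vca(R, p, seed=0):
        """R: L x Npix matrix of observed spectra; p: number of endmembers.
        Returns an L x p estimate of the mixing matrix, following the structure of [5]."""
        rng = np.random.default_rng(seed)
        U, _, _ = np.linalg.svd(R @ R.T / R.shape[1])
        Ud = U[:, :p]                               # signal-subspace basis (high-SNR branch)
        X = Ud.T @ R                                # p x Npix projected data
        u = X.mean(axis=1)
        Y = X / (u @ X)                             # projective projection onto the hyperplane x^T u = 1
        A = np.zeros((p, p))
        A[-1, 0] = 1.0                              # auxiliary matrix initialized with e_u
        indices = np.zeros(p, dtype=int)
        for i in range(p):
            w = rng.standard_normal(p)
            f = (np.eye(p) - A @ np.linalg.pinv(A)) @ w   # direction orthogonal to current vertices
            f /= np.linalg.norm(f)
            v = f @ Y
            indices[i] = int(np.argmax(np.abs(v)))  # extreme projection = next simplex vertex
            A[:, i] = Y[:, indices[i]]
        return Ud @ X[:, indices]                   # estimated mixing matrix M_hat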
[1] C.-I Chang and Q. Du. Estimation of number of spectrally distinct signal sources in hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing, 42(3):608–619, March 2004.
[2] David W. Coulter. Remote Sensing Analysis of Alteration Mineralogy Associated with Natural Acid Drainage in the Grizzly Peak Caldera, Sawatch Range, Colorado. PhD thesis, Colorado School of Mines, Golden, Colorado, 2006.
[3] A.A. Green, M. Berman, P. Switzer, and M.D. Craig. A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Transactions on Geoscience and Remote Sensing, 26(1):65–74, 1988.
[4] Fred A. Kruse. Comparison of AVIRIS and Hyperion for hyperspectral mineral mapping. http://w.hgimaging.com/PDF/Kruse_JPL2002_AVIRIS_Hyperion.pdf, 2002.
[5] José M. P. Nascimento and José M. Bioucas Dias. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Transactions on Geoscience and Remote Sensing, 43(4), April 2005.
[6] William H. Press, Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. Numerical Recipes in C. Cambridge University Press, Cambridge, New York, Port Chester, Melbourne, Sydney, 1988.
[7] M.O. Smith, P.E. Johnson, and J.B. Adams. Quantitative determination of mineral types and abundances from reflectance spectra using principal component analysis. Journal of Geophysical Research, 90:C797–C804, 1985.
[8] Frank D. van der Meer. Extraction of mineral absorption features from high-spectral resolution data using non-parametric geostatistical techniques. International Journal of Remote Sensing, 15:2193–2214, 1994.
[9] Frank D. van der Meer and Steven M. de Jong. Imaging Spectroscopy. Kluwer Academic Publishers, Dordrecht, Boston, London, 2001.
[10] Frank D. van der Meer, Steven M. de Jong, and W. Bakker. Imaging Spectroscopy: Basic analytical techniques, pages 17–62. Kluwer Academic Publishers, Dordrecht, Boston, London, 2001.
[11] R. A. White. Image mean and covariance: http://dbwww.essc.psu.edu/lasdoc/user/covar.html, 2005.
[12] R. A. White. Karhunen-loeve transformation: http://dbwww.essc.psu.edu/lasdoc/user/karlov.html, 2005.
[13] R. A. White. Search unsupervised training site selection: http://dbwww.essc.psu.edu/lasdoc/user/search.html, 2005.