## Vertex Component Analysis in Hyperspectral Unmixing

### Contents

2 Image Classification and Statistics

2.1 Image Mean Vector and Covariance Matrix

2.2 Principal Component Transformation

2.3 Minimum Noise Fraction

3 Virtual Dimensionality

3.1 HFC

3.2 NWHFC

3.3 NSP

3.4 OSP

4 Vertex Component Analysis

4.1 VCA Algorithm

4.1.1 SNR

4.1.2 Algorithm 1: Vertex Component Analysis (VCA)

4.1.3 VCA Programming Notes

4.1.4 Icarus Image GUI Notes

### 1 Introduction


Remote Sensing Analysis of Alteration Mineralogy Associated With Natural Acid Drainage

in the Grizzly Peak Caldera, Sawatch Range, Colorado

David W. Coulter

Ph.D. Thesis Colorado School of Mines, 2006

The Grizzly Peak Caldera is located in Colorado’s Mineral Belt approximately 15 miles southeast of Aspen. It is the product of a volcanic eruption ~35 million years ago that ejected some 600 ${\text{km}}^{3}$ of magma. With a volcanic explosivity index of 7, it was at least four times larger than Tambora.[3, sec 1.5]

Coulter sought to identify acidic conditions by the different weathering states of iron oxide, which forms Hematite (red) in acidic conditions, and Goethite (light yellow) and Jarosite (dark yellow) as pH increases, Jarosite being the most immediate oxidation product of iron sulfide (pyrite).

Note on Partial Unmixing:

“Unmixing is critical in imaging spectrometry since virtually every pixel contains some macroscopic mixture of materials. The theory of, and methods for unmixing of spectroscopic signatures are found in a number of sources. Hapke (1993) provides models for linear and non-linear spectral mixing and a discussion of the criteria for using each approach. In Earth remote sensing, a linear mixing model is typically used. Full unmixing which assumes that the spectra of all possible components are known, is described by van der Meer (2000) and Boardman (1989). Since it is often impossible to identify, a priori, all possible components, partial unmixing is an important tool. Match Filtering (MF) (Harsanyi and Chang 1994), Constrained Energy Minimization (CEM similar to Match Filtering) (Farrand and Harsanyi 1997; Farrand 2001), and Mixture Tuned Match Filtering (${\text{MTMF}}^{\text{tm}}$) (Boardman et al. 1995) are commonly used methods for partial unmixing. ${\text{MTMF}}^{\text{tm}}$ is probably the most popular unmixing method used for geologic remote sensing.”[3, Coulter sec 1.4.4]

(Emphasis Added.)

### 2 Image Classification and Statistics

#### 2.1 Image Mean Vector and Covariance Matrix

Image Mean Vector and Covariance Matrix [10, White 2005]. If an image has $P$ bands and $N$ pixels, the mean vector is

$${\overrightarrow{m}}_{p}={\left({m}_{1},{m}_{2},...{m}_{P}\right)}^{T}=\frac{1}{N}\sum _{j=1}^{N}{\overrightarrow{f}}_{j}$$ | (1) |

where ${\overrightarrow{f}}_{j}$ is the jth pixel vector of the image

$${\overrightarrow{f}}_{j}={\left({f}_{1},{f}_{2},...{f}_{P}\right)}_{j}^{T}$$ | (2) |

The image covariance matrix ${C}_{\text{PxP}}$ is

$${C}_{\text{PxP}}=\frac{1}{N-1}\left({F}_{\text{PxN}}-{M}_{\text{PxN}}\right){\left({F}_{\text{PxN}}-{M}_{\text{PxN}}\right)}^{T}$$ | (3) |

where ${F}_{\text{PxN}}$ is the matrix of $N$ pixel vectors each of length $P$

$${F}_{\text{PxN}}=\left[{\overrightarrow{f}}_{1},{\overrightarrow{f}}_{2},{\overrightarrow{f}}_{3},...,{\overrightarrow{f}}_{N}\right]$$ | (4) |

${M}_{\text{PxN}}$ is the matrix of $N$ identical mean vectors ($P$ rows by $N$ columns):

$${M}_{\text{PxN}}=\left[{\overrightarrow{m}}_{P},{\overrightarrow{m}}_{P},{\overrightarrow{m}}_{P},...,{\overrightarrow{m}}_{P}\right]={\overrightarrow{m}}_{P}\,{1}_{\text{1xN}}$$ | (5) |

where ${1}_{\text{1xN}}$ is a $1\times N$ row vector of ones.
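A minimal numpy sketch of Eqs. (1)–(5) may make the layout concrete; the image here is synthetic random data with illustrative sizes, not a real scene:

```python
import numpy as np

# Sketch of Eqs. (1)-(5): mean vector and covariance of a P x N image
# matrix (P bands, N pixels). Synthetic data, purely illustrative.
rng = np.random.default_rng(0)
P, N = 5, 1000
F = rng.normal(size=(P, N))            # image matrix F_PxN

m = F.mean(axis=1)                     # mean vector, Eq. (1)
X = F - m[:, None]                     # F - M, with M = m broadcast, Eq. (5)
C = (X @ X.T) / (N - 1)                # covariance matrix, Eq. (3)
```

The `1/(N-1)` normalization matches Eq. (3); later sections (e.g. Eq. (39)) use `1/N` instead.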

#### 2.2 Principal Component Transformation

Principal Component Transformation [7, Smith 1985], also known as the Karhunen-Loève Transformation [11, White 2005]; implemented in GRASS imagery as i.pca. Let

$${X}_{\text{PxN}}=\left({F}_{\text{PxN}}-{M}_{\text{PxN}}\right)\phantom{\rule{3.33234pt}{0ex}}\phantom{\rule{3.33234pt}{0ex}}\phantom{\rule{3.33234pt}{0ex}}\text{zero-mean image matrix}$$ | (6) |

$${Z}_{\text{PxN}}={A}_{\text{PxP}}\phantom{\rule{3.33234pt}{0ex}}\phantom{\rule{3.33234pt}{0ex}}{X}_{\text{PxN}}$$ | (7) |

where

- ${F}_{\text{PxN}}$ is the input-image matrix ($P$ bands by $N$ pixels),
- ${M}_{\text{PxN}}$ is the mean vector matrix,
- ${Z}_{\text{PxN}}$ is the output-image matrix,
- ${A}_{\text{PxP}}$ is the $P\times P$ matrix whose rows are the eigenvectors of the covariance matrix ${C}_{\text{PxP}}$, arranged by decreasing magnitude of eigenvalue, as typically returned by SVD routines.

The rows ${a}_{i}$ of $A$ are chosen so that the transformed components are mutually uncorrelated:

$$\begin{array}{rcll}{\lambda}_{i}{\delta}_{il}& =& \sum _{k=1}^{N}{Z}_{ik}{Z}_{lk}& \text{(10)}\\ & =& \sum _{k=1}^{N}\left(\sum _{j=1}^{P}{a}_{ij}{X}_{jk}\right)\left(\sum _{m=1}^{P}{a}_{lm}{X}_{mk}\right)& \text{(11)}\\ & =& \sum _{j=1}^{P}\sum _{m=1}^{P}{a}_{ij}{a}_{lm}\left[\sum _{k=1}^{N}{X}_{jk}{X}_{mk}\right]& \text{(12)}\\ & \equiv & \sum _{j=1}^{P}\sum _{m=1}^{P}{a}_{ij}{a}_{lm}{C}_{jm}& \text{(13)}\\ & =& {a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{C}_{\text{PxP}}\phantom{\rule{3.33234pt}{0ex}}{a}_{l}& \text{(14)}\end{array}$$

where

- ${C}_{\text{PxP}}$ is the symmetric positive-definite image covariance matrix,
- ${\overrightarrow{a}}_{i}$ are its orthonormal eigenvectors with eigenvalues ${\lambda}_{i}$.

The magnitudes of the ${\lambda}_{i}$ impose an ordering on the transformed component vectors ${Z}_{ik}={\sum}_{j=1}^{P}{a}_{ij}{X}_{jk}$. Those with the largest ${\lambda}_{i}$, such that ${\lambda}_{i}/{\lambda}_{max}>tol$, are the Principal Components. The tolerance $tol$ should be related to the noise floor.
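A minimal numpy sketch of the transform in Eqs. (6)–(14); the per-band scales and the tolerance `tol` are illustrative choices, and the data are synthetic:

```python
import numpy as np

# Sketch of Eqs. (6)-(14): Z = A X, where the rows of A are the
# covariance eigenvectors ordered by decreasing eigenvalue.
# Band scales and tol are illustrative; data are synthetic.
rng = np.random.default_rng(1)
P, N = 4, 500
F = rng.normal(size=(P, N)) * np.array([5.0, 2.0, 1.0, 0.1])[:, None]

X = F - F.mean(axis=1, keepdims=True)   # zero-mean image, Eq. (6)
C = (X @ X.T) / (N - 1)                 # covariance matrix
lam, V = np.linalg.eigh(C)              # eigh returns ascending eigenvalues
order = np.argsort(lam)[::-1]
lam, A = lam[order], V[:, order].T      # rows of A are eigenvectors, descending
Z = A @ X                               # transformed (decorrelated) image, Eq. (7)

tol = 1e-3                              # noise-floor-related cut
n_principal = int(np.sum(lam / lam.max() > tol))
```

With these scales the weakest band falls below `tol`, so three components survive the cut.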

#### 2.3 Minimum Noise Fraction

Minimum Noise Fraction [9, pg. 38][4]. We wish to find a particular coefficient matrix $\left\{{a}_{ij};\phantom{\rule{3.33234pt}{0ex}}\phantom{\rule{3.33234pt}{0ex}}i,j=1,\dots P\right\}$ that in some sense maximizes the image S/N, assuming the image pixel vectors $\left\{{\overrightarrow{X}}_{k},k=1,\dots N\right\}$ are the sum of uncorrelated signal and noise:

$$\begin{array}{rcll}{X}_{ik}& =& {S}_{ik}+{N}_{ik}\phantom{\rule{3.33234pt}{0ex}}\phantom{\rule{3.33234pt}{0ex}}i=1,\dots P\phantom{\rule{3.33234pt}{0ex}}\phantom{\rule{3.33234pt}{0ex}}k=1,\dots N& \text{(15)}\text{}\text{}\\ {Z}_{ik}& =& \sum _{j=1}^{P}{a}_{ij}{X}_{jk}=\sum _{j=1}^{P}{a}_{ij}\left({S}_{jk}+{N}_{jk}\right)& \text{(16)}\text{}\text{}\\ {Z}_{i}^{T}& =& {a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{X}_{\text{PxN}}\phantom{\rule{3.33234pt}{0ex}}\text{where}{a}_{i}^{T}=\left({a}_{i1},{a}_{i2},\dots {a}_{iP}\right)& \text{(17)}\text{}\text{}\\ & =& {a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{S}_{\text{PxN}}+{a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{N}_{\text{PxN}}& \text{(18)}\text{}\text{}\end{array}$$

Maximize

$$\begin{array}{rcll}R& =& \frac{\left({a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{S}_{\text{PxN}}\right){\left({a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{S}_{\text{PxN}}\right)}^{T}}{\left({a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{N}_{\text{PxN}}\right){\left({a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{N}_{\text{PxN}}\right)}^{T}}=\frac{{a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}\left({S}_{\text{PxN}}\phantom{\rule{3.33234pt}{0ex}}{S}_{\text{PxN}}^{T}\right)\phantom{\rule{3.33234pt}{0ex}}{a}_{i}}{{a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}\left({N}_{\text{PxN}}\phantom{\rule{3.33234pt}{0ex}}{N}_{\text{PxN}}^{T}\right)\phantom{\rule{3.33234pt}{0ex}}{a}_{i}}& \text{(20)}\text{}\text{}\\ & =& \frac{{a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{C}_{\text{PxP}}^{S}\phantom{\rule{3.33234pt}{0ex}}{a}_{i}}{{a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{C}_{\text{PxP}}^{N}\phantom{\rule{3.33234pt}{0ex}}{a}_{i}}=\frac{{a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}\left({C}_{\text{PxP}}-{C}_{\text{PxP}}^{N}\right)\phantom{\rule{3.33234pt}{0ex}}{a}_{i}}{{a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{C}_{\text{PxP}}^{N}\phantom{\rule{3.33234pt}{0ex}}{a}_{i}}& \text{(21)}\text{}\text{}\\ & =& \frac{{a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{C}_{\text{PxP}}\phantom{\rule{3.33234pt}{0ex}}{a}_{i}}{{a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{C}_{\text{PxP}}^{N}\phantom{\rule{3.33234pt}{0ex}}{a}_{i}}-1\phantom{\rule{3.33234pt}{0ex}};\text{IF}{C}_{\text{PxP}}={C}_{\text{PxP}}^{S}+{C}_{\text{PxP}}^{N}& \text{(22)}\text{}\text{}\\ & =& {\lambda}_{i}-1& \text{(23)}\text{}\text{}\end{array}$$

where ${\lambda}_{i}$ is the generalized eigenvalue of ${C}_{\text{PxP}}$ with respect to ${C}_{\text{PxP}}^{N}$, and the ${a}_{i}$ are the corresponding generalized eigenvectors. Compare with PCA (Section 2.2), which maximizes total variance rather than S/N.

Noise Covariance ${C}_{\text{PxP}}^{N}$: Green suggests ${C}_{\text{PxP}}^{N}$ be of unit variance and band-to-band uncorrelated.

- Sensor error estimate (AVIRIS .rcc file): call the supplied error vector $\left\{{e}_{j};\phantom{\rule{3.33234pt}{0ex}}\phantom{\rule{3.33234pt}{0ex}}j=1,\dots P\right\}$. Then

$${C}_{lm}^{N}={e}_{m}^{n}{\delta}_{lm}\phantom{\rule{3.33234pt}{0ex}}\phantom{\rule{3.33234pt}{0ex}}\phantom{\rule{3.33234pt}{0ex}}\left(n=1\text{ or }2\right)$$ | (25) |

is completely uncorrelated. In the ideal case all ${e}_{m}$ are equal:

$$\begin{array}{rcll}{C}_{\text{PxP}}^{N}& =& e\phantom{\rule{3.33234pt}{0ex}}I\phantom{\rule{3.33234pt}{0ex}},\quad\text{and}& \text{(26)}\\ {a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{C}_{\text{PxP}}\phantom{\rule{3.33234pt}{0ex}}{a}_{i}& =& {\lambda}_{i}\phantom{\rule{3.33234pt}{0ex}}{a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}{C}_{\text{PxP}}^{N}\phantom{\rule{3.33234pt}{0ex}}{a}_{i}={\lambda}_{i}e\phantom{\rule{3.33234pt}{0ex}}\left({a}_{i}^{T}\phantom{\rule{3.33234pt}{0ex}}I\phantom{\rule{3.33234pt}{0ex}}{a}_{i}\right)& \text{(27)}\\ & =& {\lambda}_{i}\phantom{\rule{3.33234pt}{0ex}}e\phantom{\rule{3.33234pt}{0ex}}\left({a}_{i}^{T}\cdot {a}_{i}\right)={\lambda}_{i}\phantom{\rule{3.33234pt}{0ex}}e={\lambda}_{i}^{PCA}& \text{(28)}\end{array}$$

since the eigenvectors ${a}_{i}$ are orthonormal, so ${\lambda}_{i}={\lambda}_{i}^{PCA}$ if the variance $e\equiv 1$.
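The generalized eigenproblem $C\,a={\lambda}\,{C}^{N}a$ of Eqs. (20)–(23) can be sketched in numpy by symmetric whitening; the synthetic data and the assumption of a known diagonal noise covariance are illustrative only:

```python
import numpy as np

# Sketch of the MNF generalized eigenproblem C a = lambda C_N a
# (cf. Eqs. 20-23), solved by symmetric whitening with C_N^(-1/2).
# Synthetic data; the diagonal noise covariance is assumed known.
rng = np.random.default_rng(2)
P, N = 4, 2000
signal = rng.normal(size=(P, N)) * 3.0            # illustrative "signal"
noise_var = np.array([0.5, 1.0, 1.5, 2.0])        # per-band noise variances
noise = rng.normal(size=(P, N)) * np.sqrt(noise_var)[:, None]
X = signal + noise
X = X - X.mean(axis=1, keepdims=True)

C = (X @ X.T) / (N - 1)                           # image covariance
C_N = np.diag(noise_var)                          # noise covariance
W = np.diag(1.0 / np.sqrt(noise_var))             # C_N^(-1/2), diagonal case

lam, U = np.linalg.eigh(W @ C @ W)                # whitened symmetric problem
lam = lam[::-1]                                   # descending; S/N_i = lam_i - 1
A = (W @ U[:, ::-1]).T                            # rows a_i: generalized eigenvectors
```

Each row of `A` satisfies $C\,a_i={\lambda}_i\,C^N a_i$, which is the defining relation below Eq. (23).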

Homogeneous Area Method [8, sec 2.9.1]: if possible, find a homogeneous area of ${N}_{h}$ pixels in the image:

$$\begin{array}{rcll}{\overrightarrow{M}}_{\text{L}}& =& \frac{1}{{N}_{h}}\sum _{k=1}^{{N}_{h}}{\overrightarrow{X}}_{k}\quad\text{local mean, vector over bands}& \text{(29)}\\ {\overrightarrow{\sigma}}_{\text{L}}& =& {\left[\frac{1}{{N}_{h}-1}\sum _{k=1}^{{N}_{h}}{\left({\overrightarrow{X}}_{k}-{\overrightarrow{M}}_{\text{L}}\right)}^{2}\right]}^{1/2}\quad\text{(componentwise)}& \text{(30)}\\ {\sigma}_{Li}& =& {\left[\frac{1}{{N}_{h}-1}\sum _{k=1}^{{N}_{h}}{\left({X}_{ik}-{M}_{Li}\right)}^{2}\right]}^{1/2}& \text{(31)}\\ {C}_{\text{L}\phantom{\rule{3.33234pt}{0ex}}ij}& =& \frac{1}{{N}_{h}-1}\sum _{k=1}^{{N}_{h}}\left({X}_{ik}-{M}_{Li}\right)\left({X}_{jk}-{M}_{Lj}\right)\quad\text{(general)}& \text{(32)}\\ & =& \frac{{\delta}_{ij}}{{N}_{h}-1}\sum _{k=1}^{{N}_{h}}{\left({X}_{ik}-{M}_{Li}\right)}^{2}\quad\text{(zero band-to-band)}& \text{(33)}\end{array}$$

Local Means and Local Variances [8, sec 2.9.2]

1. Divide the image into small ${N}_{b}$-pixel blocks (4x4, 5x5, ...).
2. For each (block, band) get the local mean and variance:

$$\begin{array}{rcll}{M}_{\text{L}\phantom{\rule{3.33234pt}{0ex}}i}& =& \frac{1}{{N}_{b}}\sum _{k=1}^{{N}_{b}}{X}_{ik}\quad i=1,2,\dots P\ \text{bands}& \text{(34)}\\ {{\sigma}_{\text{L}\phantom{\rule{3.33234pt}{0ex}}i}}^{2}& =& \frac{1}{{N}_{b}-1}\sum _{k=1}^{{N}_{b}}{\left({X}_{ik}-{M}_{\text{L}\phantom{\rule{3.33234pt}{0ex}}i}\right)}^{2}& \text{(35)}\\ {C}_{\text{L}\phantom{\rule{3.33234pt}{0ex}}ij}& =& \frac{{\delta}_{ij}}{{N}_{b}-1}\sum _{k=1}^{{N}_{b}}\left({X}_{ik}-{M}_{\text{L}\phantom{\rule{3.33234pt}{0ex}}i}\right)\left({X}_{jk}-{M}_{\text{L}\phantom{\rule{3.33234pt}{0ex}}j}\right)& \text{(36)}\end{array}$$

${C}_{\text{L}}$ is the local covariance matrix: $P\times P$ (here diagonal), estimated from the ${N}_{b}$ pixels of each block.

3. Bin the $\left\{{{\sigma}_{\text{L}\phantom{\rule{3.33234pt}{0ex}}i}}^{2}\right\}$ into classes between band min and max values.
4. The bin with the most blocks represents the mean noise of the image. One hopes this bin is the same for all bands...


5. Suppose the “most popular” bin is a $P$-cube, each side ranging over $\left[{\sigma}_{i}^{\ast},{\sigma}_{i}^{\ast}+\Delta {\sigma}_{i}^{\ast}\right]$ and containing ${N}^{\ast}$ points. Then the average value over the bin

$$\overline{{C}_{\text{L}\phantom{\rule{3.33234pt}{0ex}}ij}}=\frac{1}{{N}^{\ast}}\sum _{k=1}^{{N}^{\ast}}{\left({C}_{\text{L}\phantom{\rule{3.33234pt}{0ex}}ij}\right)}_{k}$$ | (37) |

is the desired noise covariance matrix.

6. Caveat: assumes the image is “slowly varying enough” that enough of the ${N}_{\text{total}}$ blocks are homogeneous, i.e. their covariance is really due to noise, not true features in the image. Blocks and bins must both be “small enough” – but not too small!
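The steps above can be sketched in numpy for a single band; the block size, bin count, and flat synthetic band (true noise variance 0.04) are illustrative choices, not values from the references:

```python
import numpy as np

# Sketch of the local-means/local-variances procedure (steps 1-5) for one
# band: tile into blocks, histogram the block variances, and average over
# the most populated bin. All parameters illustrative.
rng = np.random.default_rng(3)
band = rng.normal(0.0, 0.2, size=(64, 64))       # flat background + noise
b = 4                                            # 4x4 blocks, step 1
blocks = band.reshape(64 // b, b, 64 // b, b).swapaxes(1, 2).reshape(-1, b * b)
local_var = blocks.var(axis=1, ddof=1)           # per-block variance, Eq. (35)

counts, edges = np.histogram(local_var, bins=20) # step 3: bin the variances
k = np.argmax(counts)                            # step 4: most popular bin
in_bin = (local_var >= edges[k]) & (local_var < edges[k + 1])
noise_var_est = local_var[in_bin].mean()         # step 5 average, cf. Eq. (37)
```

Since this band is pure noise, every block is "homogeneous" and the modal-bin average lands near the true variance; on a real image the caveat in step 6 applies.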

Other methods: “unsupervised training” derived-endmember classification schemes, e.g. LAS’ search [12] and GRASS’ cluster/maxlik, are based upon local covariance minimization.

### 3 Virtual Dimensionality

We implement the Harsanyi-Farrand-Chang (HFC) and noise-whitened Harsanyi-Farrand-Chang (NWHFC) estimators of the image virtual dimensionality VD $=p\le L$ as described in [1, C.-I Chang and Du 2004] and [2, C.-I Chang 2013], Chapter 5.3.6.

#### 3.1 HFC

Let ${R}_{\text{LxN}}\equiv \left[{r}_{1},{r}_{2},\dots {r}_{N}\right]$ be the input image matrix of $N$ pixel (sample) vectors ${r}_{n}$ each of length $L$ spectral bands: ${r}_{i}={\left({r}_{i1},{r}_{i2},\dots {r}_{i,L}\right)}^{T};1\le i\le N$. The image spectral correlation matrix ${R}_{\text{LxL}}$ and covariance matrix ${K}_{\text{LxL}}$ are then

$$\begin{array}{rcll}{R}_{\text{LxL}}& =& \frac{1}{N}{R}_{\text{LxN}}{R}_{\text{LxN}}^{T}& \text{(38)}\\ {K}_{\text{LxL}}& =& \frac{1}{N}\left({R}_{\text{LxN}}-\mu \right){\left({R}_{\text{LxN}}-\mu \right)}^{T}\quad\text{where}& \text{(39)}\\ {\mu}_{l}& =& \frac{1}{N}\sum _{i=1}^{N}{r}_{il}\quad\text{is the vector of image spectral means}& \text{(40)}\end{array}$$

Let $\left\{{\widehat{\lambda}}_{1}\ge {\widehat{\lambda}}_{2}\ge \cdots \ge {\widehat{\lambda}}_{L}\right\}$ and $\left\{{\lambda}_{1}\ge {\lambda}_{2}\ge \cdots \ge {\lambda}_{L}\right\}$ denote the (ordered) correlation and covariance eigenvalues. By assuming that signal sources are nonrandom unknown positive constants and noise is white with zero mean, we may expect

$$\begin{array}{rcll}{\widehat{\lambda}}_{l}& >& {\lambda}_{l}\quad\text{for}\ l=1,\dots ,\text{VD}& \text{(41)}\\ {\widehat{\lambda}}_{l}& =& {\lambda}_{l}={\sigma}_{nl}^{2}\quad\text{for}\ l=\text{VD}+1,\dots ,L& \text{(42)}\end{array}$$

where ${\sigma}_{nl}^{2}$ is the noise variance in the $l$th spectral channel. Formulate the VD determination as a binary hypothesis problem:

$$\begin{array}{rcll}{H}_{0}:{z}_{l}& =& {\widehat{\lambda}}_{l}-{\lambda}_{l}=0\quad\text{versus}& \text{(44)}\\ {H}_{1}:{z}_{l}& =& {\widehat{\lambda}}_{l}-{\lambda}_{l}>0\quad\text{for}\ l=1,2,\dots ,L& \text{(45)}\end{array}$$

The null hypothesis ${H}_{0}$ and the alternative hypothesis ${H}_{1}$ represent the case that the correlation eigenvalue is equal to its corresponding covariance eigenvalue (no signal), and the case that the correlation eigenvalue is greater than its corresponding covariance eigenvalue (signal present). When ${H}_{1}$ is true (i.e. ${H}_{0}$ fails), it implies there is an endmember contributing to the correlation eigenvalue in addition to noise, since the noise energy represented by the eigenvalue of ${R}_{\text{LxL}}$ in that particular component is the same as the one represented by the eigenvalue of ${K}_{\text{LxL}}$ in its corresponding component.

If we assume the noise in each spectral dimension has zero mean and variance ${\sigma}_{nl}^{2}$, then ${\widehat{\lambda}}_{l}={\mu}_{l}^{2}+{\sigma}_{nl}^{2}$ and ${\lambda}_{l}={\sigma}_{nl}^{2}$, where ${\mu}_{l}$ is the sample mean in the $l$th spectral dimension and ${\sigma}_{nl}^{2}$ is the channel noise. Despite the fact that $\widehat{\lambda}$ and $\lambda $ are unknown constants, we can model each pair of eigenvalues $\left(\widehat{\lambda},\lambda \right)$ under hypotheses ${H}_{0}$ and ${H}_{1}$ as random variables with asymptotic probability densities given by

$$\begin{array}{rcll}{p}_{0}\left({z}_{l}\right)& =& p\left({z}_{l}|{H}_{0}\right)\simeq N\left(0,{\sigma}_{zl}^{2}\right)\text{and}& \text{(46)}\text{}\text{}\\ {p}_{1}\left({z}_{l}\right)& =& p\left({z}_{l}|{H}_{1}\right)\simeq N\left({\mu}_{l},{\sigma}_{zl}^{2}\right)\text{for}\phantom{\rule{1em}{0ex}}l=1,2,\dots ,L& \text{(47)}\text{}\text{}\\ N\left(\mu ,{\sigma}^{2}\right)& =& \frac{1}{\sigma \sqrt{2\pi}}exp\left[\frac{-{\left(x-\mu \right)}^{2}}{2{\sigma}^{2}}\right]& \text{(48)}\text{}\text{}\end{array}$$

respectively, where ${\mu}_{l}$ is an unknown constant for each $l$ and the variance ${\sigma}_{zl}^{2}$ of the difference ${z}_{l}={\widehat{\lambda}}_{l}-{\lambda}_{l}$ is given by

$${\sigma}_{zl}^{2}=\text{Var}\left[{\widehat{\lambda}}_{l}-{\lambda}_{l}\right]=\text{Var}\left[{\widehat{\lambda}}_{l}\right]+\text{Var}\left[{\lambda}_{l}\right]-2\phantom{\rule{3.33234pt}{0ex}}\text{Cov}\left({\widehat{\lambda}}_{l},{\lambda}_{l}\right)$$ | (49) |

“When the total number of samples $N$ is sufficiently large, $\text{Var}\left[{\widehat{\lambda}}_{l}\right]\simeq 2{\widehat{\lambda}}_{l}^{2}/N$ and $\text{Var}\left[{\lambda}_{l}\right]\simeq 2{\lambda}_{l}^{2}/N$. Therefore, the noise variance ${\sigma}_{zl}^{2}$ in (42) can be estimated and approximated using (49).” (Although it’s not obvious exactly how; play statistics on collections of subimages, perhaps? In his MATLAB code, Chang approximates ${\sigma}_{zl}^{2}\simeq \left(2/N\right)\left({\widehat{\lambda}}_{l}^{2}+{\lambda}_{l}^{2}\right)$, so we do as well.)

Chang and Du ([1, eq. 8]) use Schwarz’s inequality to bound

$$\begin{array}{rcll}\text{Cov}\left({\widehat{\lambda}}_{l},{\lambda}_{l}\right)& \le & \sqrt{\text{Var}\left[{\widehat{\lambda}}_{l}\right]\text{Var}\left[{\lambda}_{l}\right]}\simeq \frac{2}{N}\left({\widehat{\lambda}}_{l}\cdot {\lambda}_{l}\right)\quad\text{so that}& \text{(50)}\\ {\sigma}_{zl}^{2}& \ge & \frac{2}{N}{\left({\widehat{\lambda}}_{l}-{\lambda}_{l}\right)}^{2}& \text{(51)}\\ {\sigma}_{zl}^{2}& \le & \frac{2}{N}\left({\widehat{\lambda}}_{l}^{2}+{\lambda}_{l}^{2}\right)& \text{(52)}\end{array}$$

From (46), (47), and (49), we define the false alarm probability and detection probability as

$$\begin{array}{rcll}{P}_{F}& =& {\int}_{{\tau}_{l}}^{\infty}{p}_{0}\left(z\right)dz& \text{(54)}\text{}\text{}\\ {P}_{D}& =& {\int}_{{\tau}_{l}}^{\infty}{p}_{1}\left(z\right)dz& \text{(55)}\text{}\text{}\end{array}$$

We then choose a given false-alarm probability ${P}_{F}$ (for example 0.001) and invert (54) to give

$$\begin{array}{rcll}{\tau}_{l}& =& {\sigma}_{l}\sqrt{2}\phantom{\rule{3.33234pt}{0ex}}\text{erfc\_inv}\left(2{P}_{F}\right)\quad\text{where}& \text{(56)}\\ {\sigma}_{l}& \simeq & \sqrt{2\left({\widehat{\lambda}}_{l}^{2}+{\lambda}_{l}^{2}\right)/N}& \text{(57)}\end{array}$$

A case of ${\widehat{\lambda}}_{l}-{\lambda}_{l}>{\tau}_{l}$ fails the null hypothesis test, and indicates there is signal energy assumed to contribute to the eigenvalue ${\widehat{\lambda}}_{l}$ in the $l$th spectral dimension. Do note the threshold ${\tau}_{l}$ is different for each spectral dimension $l$.
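The whole HFC test can be sketched in numpy; the synthetic image and $P_F = 10^{-3}$ are illustrative, and `erfc_inv(2 P_F)` is evaluated through the standard-normal quantile (since ${\sigma}\sqrt{2}\,\text{erfc\_inv}(2{P}_{F})={\sigma}\,{\Phi}^{-1}(1-{P}_{F})$), using only the stdlib `statistics.NormalDist`:

```python
import numpy as np
from statistics import NormalDist

# Sketch of the HFC test (Eqs. 38-57) on synthetic data. P_F is an
# illustrative choice; sigma uses Chang's approximation (Eq. 52).
rng = np.random.default_rng(4)
L, N = 10, 5000
R = rng.normal(size=(L, N)) * 0.1                # white noise, sd 0.1
R[:3] += np.array([5.0, 3.0, 2.0])[:, None]      # nonzero-mean "signal" bands

Rcorr = (R @ R.T) / N                            # correlation matrix, Eq. (38)
mu = R.mean(axis=1, keepdims=True)
K = ((R - mu) @ (R - mu).T) / N                  # covariance matrix, Eq. (39)

lam_hat = np.sort(np.linalg.eigvalsh(Rcorr))[::-1]
lam = np.sort(np.linalg.eigvalsh(K))[::-1]

P_F = 1e-3
sigma = np.sqrt(2.0 * (lam_hat**2 + lam**2) / N) # sigma_zl approximation
tau = sigma * NormalDist().inv_cdf(1.0 - P_F)    # per-band threshold, Eq. (56)
vd = int(np.sum(lam_hat - lam > tau))            # HFC estimate of VD
```

Note the constant offsets here form a single mean direction (Rcorr differs from K by a rank-one update), so this toy case flags essentially one component; real images with varying abundances spread signal energy over more eigenvalues.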

#### 3.2 NWHFC

As noted by Chang ([2, Section 5.3.6]), the signature variance ${\sigma}_{sl}^{2}$ is generally very small and can be influenced by interband covariances. Interband covariance may be reduced by “noise whitening”: preprocessing that multiplies the input image by the inverse square root of a noise covariance matrix, estimated by a technique developed by Roger and Arnold, prior to running the HFC method. This is discussed far too sparsely in [1, Chang and Du] and hardly at all in [2, Chang 2013], so we’ll merely reference the former and translate MATLAB code from the latter (Section A.1).

$$\begin{array}{rcll}{K}_{\text{inv}}& =& {K}_{\text{LxL}}^{-1}& \text{(58)}\\ \kappa & =& \text{diag}\left({K}_{\text{inv}}\right)& \text{(59)}\\ {K}_{\text{noise}}& =& \text{diag}\left(1./\kappa \right)& \text{(60)}\end{array}$$

The noise covariance matrix ${K}_{\text{noise}}$ is the LxL diagonal matrix whose elements are the inverses of the diagonal elements of ${K}_{\text{LxL}}^{-1}$. For noise whitening we want the square root of its inverse, denoted ${K}_{n}^{-\left(1/2\right)}$, also diagonal:

$$\begin{array}{rcll}{K}_{n}^{-\left(1/2\right)}& =& {\left(\sqrt{{K}_{\text{noise}}}\right)}^{-1}& \text{(62)}\\ {\left[{K}_{n}^{-\left(1/2\right)}\right]}_{ll}& =& \sqrt{{\left({K}_{\text{LxL}}^{-1}\right)}_{ll}}=\sqrt{{\kappa}_{l}}& \text{(63)}\\ {Y}_{\text{LxN}}& =& {K}_{n}^{-\left(1/2\right)}\phantom{\rule{3.33234pt}{0ex}}{R}_{\text{LxN}}& \text{(64)}\\ \text{VD}& =& \text{HFC}\left(Y,t\right)& \text{(65)}\end{array}$$

$Y$ is the noise-whitened image matrix. The NWHFC VD estimate is obtained simply by running the HFC algorithm on $Y$ rather than $R$.
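Eqs. (58)–(64) translate almost line for line into numpy; the uncorrelated synthetic bands below are illustrative, chosen so the Roger-Arnold diagonal estimate should approximately recover the per-band noise variances:

```python
import numpy as np

# Sketch of Eqs. (58)-(64): diagonal noise covariance from the inverse of
# the image covariance (Roger-Arnold style), then noise whitening.
# The uncorrelated synthetic bands are illustrative, not a real image.
rng = np.random.default_rng(5)
L, N = 6, 4000
noise_sd = np.linspace(0.5, 2.0, L)
R = rng.normal(size=(L, N)) * noise_sd[:, None]

mu = R.mean(axis=1, keepdims=True)
K = ((R - mu) @ (R - mu).T) / N                  # covariance K_LxL
K_inv = np.linalg.inv(K)                         # Eq. (58)
kappa = np.diag(K_inv)                           # Eq. (59)
K_noise = np.diag(1.0 / kappa)                   # Eq. (60)
K_n_inv_sqrt = np.diag(np.sqrt(kappa))           # Eq. (63)
Y = K_n_inv_sqrt @ R                             # Eq. (64): whitened image
```

On real data the interband structure makes diag(K⁻¹)⁻¹ differ from diag(K); that difference is exactly what the Roger-Arnold estimate exploits.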

#### 3.3 NSP

The Noise Subspace Projection method [1] estimates VD solely from the spectral covariance matrix ${K}_{\text{LxL}}$. Let the matrix ${K}_{n}^{-\left(1/2\right)}$ be as above and define the noise-whitened covariance matrix

$$\bar{K}={K}_{n}^{-\left(1/2\right)}\phantom{\rule{3.33234pt}{0ex}}{K}_{\text{LxL}}\phantom{\rule{3.33234pt}{0ex}}{K}_{n}^{-\left(1/2\right)}$$ | (66) |

As a result, the noise variance of each band in the whitened $\bar{K}$ is reduced to unity. Let ${\left\{{u}_{l}\right\}}_{l=1}^{L}$ be a set of eigenvectors generated by $\bar{K}$, which can then be expressed as

$$\bar{K}=\sum _{l=1}^{\text{VD}}{\bar{\lambda}}_{l}{u}_{l}{u}_{l}^{T}+\sum _{l=\text{VD}+1}^{L}{\bar{\lambda}}_{l}{u}_{l}{u}_{l}^{T}$$ | (67) |

where ${\left\{{u}_{l}\right\}}_{l=1}^{\text{VD}}$ and ${\left\{{u}_{l}\right\}}_{l=\text{VD}+1}^{L}$ span the signal subspace and noise subspace respectively. The variances of the noise components of (67) have been whitened and normalized to unity, so ${\bar{\lambda}}_{l}=1;\ l=\text{VD}+1,\dots ,L$ and ${\bar{\lambda}}_{l}>1$ otherwise. The problem of VD estimation can then be formulated as a binary hypothesis testing problem:

$$\begin{array}{rcll}{H}_{0}:{y}_{l}& =& {\bar{\lambda}}_{l}=1\quad\text{versus}& \text{(68)}\\ {H}_{1}:{y}_{l}& =& {\bar{\lambda}}_{l}>1\quad\text{for}\ l=1,2,\dots ,L& \text{(69)}\end{array}$$

where

$$\begin{array}{rcll}{p}_{0}\left({y}_{l}\right)& =& p\left({y}_{l}|{H}_{0}\right)\simeq N\left(1,{\sigma}_{yl}^{2}\right)\text{and}& \text{(70)}\text{}\text{}\\ {p}_{1}\left({y}_{l}\right)& =& p\left({y}_{l}|{H}_{1}\right)\simeq N\left({\mu}_{l},{\sigma}_{yl}^{2}\right)\text{for}\phantom{\rule{1em}{0ex}}l=1,2,\dots ,L& \text{(71)}\text{}\text{}\\ N\left(\mu ,{\sigma}^{2}\right)& =& \frac{1}{\sigma \sqrt{2\pi}}exp\left[\frac{-{\left(x-\mu \right)}^{2}}{2{\sigma}^{2}}\right]& \text{(72)}\text{}\text{}\end{array}$$

where ${\mu}_{l}$ is an unknown constant for each $l$ and the variance ${\sigma}_{yl}^{2}$ is given by

$${\sigma}_{yl}^{2}=\text{Var}\left[{\bar{\lambda}}_{l}\right]\simeq \frac{2{\bar{\lambda}}_{l}^{2}}{N}$$ | (73) |

which can be further reduced under hypothesis ${H}_{0}$ to

$${\sigma}_{yl}^{2}\simeq \frac{2}{N}$$ | (74) |

Finally, find the Neyman-Pearson detector ${\delta}_{\text{NP}}$ for (68) to determine VD. The false-alarm probability ${P}_{F}$ is the probability that we think we’ve detected a signal when none was actually present. In terms of eigenvalues ${\bar{\lambda}}_{l}$ this is the probability of measuring a value ${\bar{\lambda}}_{l}>1$ when the “true” value was its minimum noise-only value ${\bar{\lambda}}_{l}=1$. For a desired ${P}_{F}$ we seek a threshold ${\tau}_{l}$ such that

$$\begin{array}{rcll}{P}_{F}& =& {\int}_{1+{\tau}_{l}}^{\infty}{p}_{0}\left(y\right)dy\simeq {\int}_{1+{\tau}_{l}}^{\infty}N\left(1,{\sigma}_{l}^{2}\right)dy& \text{(75)}\\ & =& \frac{1}{{\sigma}_{l}\sqrt{2\pi}}{\int}_{1+{\tau}_{l}}^{\infty}exp\left[\frac{-{\left(y-1\right)}^{2}}{2{\sigma}_{l}^{2}}\right]dy& \text{(76)}\\ & =& \frac{1}{{\sigma}_{l}\sqrt{2\pi}}{\int}_{{\tau}_{l}}^{\infty}exp\left[\frac{-{x}^{2}}{2{\sigma}_{l}^{2}}\right]dx& \text{(77)}\\ & =& \frac{1}{\sqrt{\pi}}{\int}_{{\tau}_{l}/\left({\sigma}_{l}\sqrt{2}\right)}^{\infty}exp\left[-{z}^{2}\right]dz& \text{(78)}\\ & =& \frac{1}{2}\text{erfc}\left({\tau}_{l}/\left({\sigma}_{l}\sqrt{2}\right)\right)& \text{(79)}\end{array}$$

with solution

$$\begin{array}{rcll}{\tau}_{l}& =& \left({\sigma}_{l}\sqrt{2}\right)\text{erfc\_inv}\left(2{P}_{F}\right)& \text{(80)}\\ & \simeq & \frac{2{\bar{\lambda}}_{l}}{\sqrt{N}}\text{erfc\_inv}\left(2{P}_{F}\right)& \text{(81)}\\ & \simeq & \frac{2}{\sqrt{N}}\text{erfc\_inv}\left(2{P}_{F}\right)& \text{(82)}\end{array}$$

The virtual dimensionality VD is then the number of eigenvalues ${\bar{\lambda}}_{l}$ whose value exceeds the threshold $1+{\tau}_{l}$: ${\bar{\lambda}}_{l}\ge 1+{\tau}_{l};\phantom{\rule{3.33234pt}{0ex}}\phantom{\rule{3.33234pt}{0ex}}l=1,2,\dots ,\text{VD}$.
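A minimal numpy sketch of the NSP count; the noise covariance is assumed known here (rather than estimated), the rank-one synthetic signal and $P_F$ are illustrative, and the threshold uses the noise-only approximation of Eq. (82) via the standard-normal quantile:

```python
import numpy as np
from statistics import NormalDist

# Sketch of the NSP test (Eqs. 66-82): whiten the covariance, then count
# eigenvalues of K_bar exceeding 1 + tau. Known diagonal noise assumed;
# one shared synthetic source rides on independent band noise.
rng = np.random.default_rng(6)
L, N = 8, 5000
noise_sd = np.linspace(0.5, 1.5, L)
source = rng.normal(size=N)                       # one shared signal source
R = 2.0 * np.outer(np.ones(L), source) + rng.normal(size=(L, N)) * noise_sd[:, None]

mu = R.mean(axis=1, keepdims=True)
K = ((R - mu) @ (R - mu).T) / N
K_n_inv_sqrt = np.diag(1.0 / noise_sd)            # known noise here
K_bar = K_n_inv_sqrt @ K @ K_n_inv_sqrt           # Eq. (66)

lam_bar = np.sort(np.linalg.eigvalsh(K_bar))[::-1]
P_F = 1e-4
# sqrt(2/N) * Phi^{-1}(1 - P_F) equals (2/sqrt(N)) erfc_inv(2 P_F), Eq. (82)
tau = np.sqrt(2.0 / N) * NormalDist().inv_cdf(1.0 - P_F)
vd = int(np.sum(lam_bar > 1.0 + tau))
```

Sampling fluctuation pushes the largest noise eigenvalues close to $1+\tau$ when $L/N$ is not tiny, so the count can slightly overshoot the true source number on small samples.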

#### 3.4 OSP

Orthogonal Subspace Projection ([2, 5.4.1]). OSP is a general technique that also works well as part of VCA (Section 4).

Assume that there are $p$ signal sources with $L$ bands, $\left\{{s}_{1},{s}_{2},\dots {s}_{p}\right\}$, present in the data to be processed, and that every data sample vector ${r}_{i}$ can be expressed as a linear mixture of these $p$ signal sources:

$${r}_{i}={S}_{p}{\alpha}_{i}+{n}_{i}$$ | (83) |

where ${S}_{p}=\left[{s}_{1}\phantom{\rule{3.33234pt}{0ex}}{s}_{2}\cdots {s}_{p}\right]$ is a signal matrix made up of the p signal sources, $\left\{{s}_{1},{s}_{2},\dots {s}_{p}\right\}$, and ${n}_{i}$ can be interpreted as the noise vector or model error vector.

Let ${P}_{p}={S}_{p}{\left({S}_{p}^{T}{S}_{p}\right)}^{-1}{S}_{p}^{T}$ be the $p$-signal projection matrix formed by the signal matrix ${S}_{p}$. It maps every data sample vector $\left\{{r}_{1},{r}_{2},\dots ,{r}_{N}\right\}$ into the spectral space linearly spanned by the $p$ signal sources $\left\{{s}_{1},{s}_{2},\dots {s}_{p}\right\}$, so that each ${r}_{i}$ is expressed by the linear mixture (83), with the noise ${n}_{i}$ included to account for the linear mixture model error:

$$\begin{array}{rcll}{\alpha}_{i}& =& {S}_{p}^{\#}\left({r}_{i}-{n}_{i}\right)& \text{(84)}\\ & =& {\left({S}_{p}^{T}{S}_{p}\right)}^{-1}{S}_{p}^{T}\left({r}_{i}-{n}_{i}\right)& \text{(85)}\end{array}$$

where ${S}_{p}^{\#}$ denotes the left pseudo-inverse of the (non-square) ${S}_{p}$.

Since each sample vector produces a different model error and residual, the sample mean vector is used to represent an averaged model error and residual. In this case, from (83) the sample mean vector $\mu $ can be expressed by

$$\begin{array}{rcll}\mu & =& \left(1/N\right)\left[{S}_{p}\sum _{i=1}^{N}{\alpha}_{i}+\sum _{i=1}^{N}{n}_{i}\right]& \text{(86)}\\ & =& {S}_{p}\left(\left(1/N\right)\sum _{i=1}^{N}{\alpha}_{i}\right)+\left(1/N\right)\sum _{i=1}^{N}{n}_{i}& \text{(87)}\\ & =& {S}_{p}{\bar{\alpha}}_{p}+\bar{n}& \text{(88)}\end{array}$$

where ${\bar{\alpha}}_{p}=\left(1/N\right){\sum}_{i=1}^{N}{\alpha}_{i}$ and $\bar{n}=\left(1/N\right){\sum}_{i=1}^{N}{n}_{i}$. There are two ways to find ${\text{VD}}^{\text{OSP}}$.

$$\begin{array}{rcll}\text{OSP}\left(p\right)& =& E\left[{\left({P}_{p}\mu \right)}^{T}\left({P}_{p}\mu \right)\right]\quad\text{or}& \text{(89)}\\ \text{OSP}\left(p\right)& \simeq & {\bar{\alpha}}_{p}^{T}{S}_{p}^{T}{S}_{p}{\bar{\alpha}}_{p}& \text{(90)}\end{array}$$

without involving noise covariance matrix estimation. Theoretically, the value of OSP($p$) in (89) increases as the value of $p$ increases. For any given error threshold $\epsilon$, VD can be determined by a stopping rule based on $\epsilon$. The value of $p$ determined from the OSP($p$) in (89) or (90) is denoted by ${\text{VD}}^{\text{OSP}}$. Two criteria are developed to detect the abrupt change of the OSP($p$) value. One that is based on the gradient, denoted by “$\nabla $”, is defined by

$$\begin{array}{rcll}{\text{VD}}_{\text{algorithm}}^{\text{OSP},\nabla}\left(\epsilon \right)& =& arg\left\{\underset{1\le p\le L}{min}\left|\frac{\text{OSP}\left(p+1\right)}{\text{OSP}\left(p\right)}-\frac{\text{OSP}\left(p\right)}{\text{OSP}\left(p-1\right)}\right|<\epsilon \right\}.& \text{(91)}\end{array}$$

The other is based on the difference and is denoted by minus “-”:

$$\begin{array}{rcll}{\text{VD}}_{\text{algorithm}}^{\text{OSP},-}\left(\epsilon \right)& =& arg\left\{\underset{1\le p\le L}{min}\left|\text{OSP}\left(p+1\right)-\text{OSP}\left(p\right)\right|<\epsilon \right\}.& \text{(92)}\end{array}$$

For the difference criterion, the sample mean vector $\mu $ is normalized before orthogonal projection so that the values of the threshold $\epsilon$ for the two criteria are comparable. The threshold $\epsilon$ in (91) and (92) is generally selected according to a sudden drop or a clear gap between two consecutive values of $p$ in plots of the gradient (91) or difference (92) versus $p$.

It should be noted the above ${\text{VD}}^{\text{OSP}}$ definitions involve two key parameters: one is the error threshold $\epsilon$, and the other is the algorithm used to produce the $p$-signal matrix ${S}_{p}$, which is not specified in (91) and (92). We shall use VCA for “algorithm”, at least initially, but there are other choices, including target-specific algorithms such as ATGP as well as generic SVD. See [2, 5.4.1] for further discussion.
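The OSP($p$) curve of (89) and its monotone growth can be sketched in numpy; the signatures in `S_true` are random stand-ins for what VCA or ATGP would supply, and all sizes and noise levels are illustrative:

```python
import numpy as np

# Sketch of Eqs. (83)-(92): OSP(p) = |P_p mu|^2 with the projector
# P_p = S (S^T S)^{-1} S^T applied to the normalized sample mean.
# S_true stands in for signatures an algorithm like VCA/ATGP would supply.
rng = np.random.default_rng(7)
L, p_true, N = 12, 3, 1000
S_true = rng.uniform(0.1, 1.0, size=(L, p_true))  # stand-in signal matrix
alpha = rng.dirichlet(np.ones(p_true), size=N).T  # abundances summing to 1
R = S_true @ alpha + rng.normal(size=(L, N)) * 0.01

mu = R.mean(axis=1)
mu = mu / np.linalg.norm(mu)                      # normalized, per the "-" rule

def osp(S_p):
    """OSP(p) for the p-column signal matrix S_p, cf. Eq. (89)."""
    P_p = S_p @ np.linalg.inv(S_p.T @ S_p) @ S_p.T
    v = P_p @ mu
    return float(v @ v)

# OSP(p) grows as columns are added; a stopping rule watches it flatten.
curve = [osp(S_true[:, :p]) for p in range(1, p_true + 1)]
P_full = S_true @ np.linalg.inv(S_true.T @ S_true) @ S_true.T
```

Because the projected subspaces are nested, the curve is nondecreasing and bounded by $|\mu|^2 = 1$, which is exactly why thresholding its increments in (91)/(92) terminates.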

### 4 Vertex Component Analysis

Vertex Component Analysis is an “unsupervised training” derived-endmember classification scheme. We use the notation of [6, Nascimento and Dias], which is slightly different from that used in previous sections.

#### 4.1 VCA Algorithm

Assume linear mixing, and let

- $N$ number of pixels in image
- $L$ number of spectral bands recorded at each pixel
- $p$ number of endmembers, $p\le L$, and usually $p\ll L$ for hyperspectral images.
- ${m}_{j}$ is an endmember spectral vector of length $L$, $1\le j\le p$
- ${\gamma}_{i}$ is a scale factor modeling illumination variability due to surface topography at pixel $i$
- ${\alpha}_{i}={\left[{\alpha}_{i1},{\alpha}_{i2},...{\alpha}_{ip}\right]}^{T}$ is the abundance vector containing the fractions of each endmember at pixel $i$. Positivity: ${\alpha}_{ik}\ge 0$ and ${1}_{p}^{T}{\alpha}_{i}=1$ (${\sum}_{k=1}^{p}{\alpha}_{ik}=1$).
- ${x}_{i}$ is the “true” noise-free spectral vector of length $L$ at pixel $i\le N$.
- ${M}_{\text{Lxp}}=\left[{m}_{1},{m}_{2},\dots {m}_{p}\right]$ is an Lxp mixing matrix that maps an abundance vector $\alpha $ to a “true” spectral vector $x$.

Then the recorded spectral vector at pixel $i$ may be given by

$$\begin{array}{rcll}{r}_{i}& =& {x}_{i}+{n}_{i}=M{\gamma}_{i}{\alpha}_{i}+{n}_{i}\quad i=1,\dots N& \text{(93)}\\ & =& M{s}_{i}+{n}_{i}& \text{(94)}\\ {s}_{i}& \equiv & {\gamma}_{i}{\alpha}_{i}\quad\text{scaled abundance vector}& \text{(95)}\end{array}$$

Our goal is to find abundance vectors $\left\{{\alpha}_{i},i=1,\dots N\right\}$ corresponding to some endmember set $\left\{{m}_{j},j=1,\dots p\right\}$. An appropriate endmember set $\left\{m\right\}$ is determined as part of the VCA algorithm. Endmember identification – matching one or more of the VCA-generated endmembers to actual specimen spectra, such as those from the USGS spectral library or from field ground-truth sampling – may be done in subsequent processing steps.

Since the set $\left\{\alpha \in {\Re}^{p}\ :\ {1}^{T}\alpha =1,\ \alpha \succcurlyeq 0\right\}$ is a simplex, the set ${S}_{z}=\left\{z\in {\Re}^{L}\ :\ z=M\alpha ,\ {1}^{T}\alpha =1,\ \alpha \succcurlyeq 0\right\}$ is also a simplex. However, even assuming $n=0$, the observed vectors belong to the convex cone ${C}_{p}=\left\{r\in {\Re}^{L}\ :\ r=M\gamma \alpha ,\ {1}^{T}\alpha =1,\ \alpha \succcurlyeq 0,\ \gamma \ge 0\right\}$ owing to the differing scale factors $\gamma $.

But the projective projection of the convex cone ${C}_{p}$ onto a properly chosen hyperplane is a simplex whose vertices correspond to the vertices of the simplex ${S}_{z}$. The simplex ${S}_{p}=\left\{y\in {\Re}^{L}\ :\ y=r/\left({r}^{T}u\right),\ r\in {C}_{p}\right\}$ is the projective projection of the convex cone ${C}_{p}$ onto the hyperplane ${r}^{T}u=1$, where $u$ is chosen so that ${r}_{i}^{T}u>0,\ i=1,...N$ – no observed vector $r$ lies in (or near) the hyperplane through the origin orthogonal to $u$.

Geometry Note: I’m going to suggest there’s just a shade more going on here. Suppose we neglect noise and normalize all the pixel values such that $\left|{r}_{i}\right|\equiv 1$, which is easy enough to do. Consider a three-band image, such as an RGB color image. The $\left\{{r}_{i}\right\}$ are now neatly mapped onto the surface of the first (all axes non-negative) octant of the RGB unit sphere – a pixel’s intensity in any color cannot be negative. For convenience of visualization, assume we may choose $u=\left(1,1,1\right)/\sqrt{3}$ which, if there were pure pixels lying on each unit axis, i.e. (1,0,0), (0,1,0), (0,0,1), assures the simplex of points projected onto the $u$ plane is an equilateral triangle whose vertices correspond to the “1” values on the RGB unit axes.

But real photos rarely if ever span the complete RGB gamut. Their normalizations will usually map to some sub-patch of that first unit octant. Their projections onto the $u$ plane will still form a simplex, but will its vertices be those pure unit axes? The VCA algorithm asserts “not necessarily”, and starts by finding the vertices of that projected simplex on the $u$ plane.
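The geometry above is easy to check numerically. Below is a small numpy sketch with a hypothetical 3-band, 3-endmember mixing matrix `M` (all values synthetic, not from any real image): pixels generated as $r=\gamma M\alpha$ lie in the cone ${C}_{p}$, and the projective projection $y=r/\left({r}^{T}u\right)$ maps every one of them back inside the simplex whose vertices are the projected endmembers ${m}_{j}/\left({m}_{j}^{T}u\right)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: p = 3 endmembers in L = 3 bands (columns are m_j).
M = np.array([[0.9, 0.1, 0.2],
              [0.1, 0.8, 0.3],
              [0.2, 0.2, 0.9]])

# Random abundances on the simplex (Dirichlet) and random scale factors gamma.
alpha = rng.dirichlet(np.ones(3), size=1000).T   # 3 x N, columns sum to 1
gamma = rng.uniform(0.2, 1.0, size=1000)
R = M @ (alpha * gamma)                          # noise-free pixels in the cone C_p

# Projective projection onto the hyperplane r^T u = 1.
u = np.ones(3) / np.sqrt(3)
Y = R / (u @ R)                                  # y_i = r_i / (r_i^T u)

# Each projected pixel is a convex combination of the projected endmembers,
# so its barycentric coordinates w.r.t. V = [m_j / (m_j^T u)] are >= 0, sum 1.
V = M / (u @ M)
coeffs = np.linalg.solve(V, Y)
assert np.allclose(coeffs.sum(axis=0), 1.0)
assert coeffs.min() > -1e-9
```

Note the scale factors $\gamma $ disappear after the projection, which is exactly why the projective projection recovers the simplex from the cone.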

##### 4.1.1 SNR

VCA algorithm accuracy is dependent on image SNR. There are several ways to estimate it:

- Classic definition: SNR = Power(signal)/Power(noise). Select homogeneous blocks from the image – areas where the image is expected to be flat, so that variations are due to noise – and compute a (mean)/(standard deviation) ratio per band. ${a}_{i}=\frac{1}{{N}_{b}}{\sum}_{k=1}^{{N}_{b}}{r}_{ik}$ is the average intensity of the $i$th spectral band in a block of ${N}_{b}$ pixels, and ${\sigma}_{i}=\sqrt{\frac{1}{{N}_{b}}{\sum}_{k=1}^{{N}_{b}}{\left({r}_{ik}-{a}_{i}\right)}^{2}}$ is its standard deviation; then ${\text{SNR}}_{i}={a}_{i}/{\sigma}_{i}$. We may manually select homogeneous regions from the image, such as still water (lakes, ponds) or snowfields, or we may use GRASS i.cluster/i.maxlik to (perhaps) find such regions automatically.
- VCA does an image SVD or PCA as an initial dimensionality-reduction step. We might try comparing the power in the SVD (and/or PCA) components with large singular values – the ones of interest – against the power in the components with small singular values. Compare this with the ratio of the sum of the large eigenvalues to the sum of the small eigenvalues – or the sums of squares of each – to see if there’s an easy way to compute an “effective SNR” usable to select between SVD and PCA in the first step of the VCA algorithm.
- Compare the above with sensor data, e.g. the AVIRIS .rcc files. AVIRIS has very high SNR, over 500 on a good day[5, Sec. 3.3]; satellites, e.g. Hyperion, might have a tenth of this. SNR is an important consideration and worth some synthetic image modelling. Nascimento and Dias use $1{0}^{1.5}L$ for their ${\text{SNR}}_{\text{th}}$ and SNR estimator (note, however, that the published VCA MATLAB code computes the threshold as $15+10{log}_{10}p$ dB, i.e. $1{0}^{1.5}p$ in linear terms, with $p$ the number of endmembers rather than $L$ bands). We note that with an SNR greater than 500 (on a good day), AVIRIS is considered extremely good[5, Kruse JPL 2002], yet with 224 bands it will spectacularly fail this particular test, as $1{0}^{1.5}\times 224=31.62\times 224=7083$. We speculate this test is insufficient for any hyperspectral image, and something better related to the number of significant principal components (usually $d$) might be in order. Nascimento and Dias have quite a bit on the topics of estimating SNR and the virtual dimensionality (VD) of an image; see [6, eqs 10–13] and [1, C.-I Chang and Du].
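The classic mean/standard-deviation estimator in the first bullet can be sketched in a few lines of numpy, assuming a homogeneous block has already been extracted as an $L\times {N}_{b}$ array (the synthetic block below is purely illustrative):

```python
import numpy as np

def band_snr(block):
    """Per-band SNR = mean/std over a homogeneous block.

    block: L x Nb array of Nb pixel spectra drawn from a region that is
    expected to be flat, so that variation is attributed to noise.
    """
    a = block.mean(axis=1)        # a_i : average intensity of band i
    sigma = block.std(axis=1)     # sigma_i : noise estimate for band i
    return a / sigma              # SNR_i

# Synthetic check: a flat signal of 100 per band plus unit-variance noise
# should give SNR near 100 in every band.
rng = np.random.default_rng(1)
block = 100.0 + rng.normal(0.0, 1.0, size=(5, 10000))
snr = band_snr(block)
```

In practice the block would come from a manually selected still-water or snowfield region, or from an i.cluster-style segmentation as noted above.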

##### 4.1.2 Algorithm 1: Vertex Component Analysis (VCA)

Inputs:

- ${R}_{\text{LxN}}\equiv \left[{r}_{1},{r}_{2},\dots {r}_{N}\right]$ is the image matrix of $N$ pixel vectors ${r}_{n}$ each of length $L$ spectral bands.
- $p$, desired dimensionality of problem, the number of endmembers. $p$ may be estimated as the number of “large” eigenvalues of the image covariance matrix. $p\le L$.
- ${\text{SNR}}_{\text{th}}$ is the fixed SNR threshold, here $1{0}^{1.5}L$, measured with respect to the signal subspace. Different values might be used for different SNR estimators; see above.

Outputs:

- $\widehat{M}$ is the $L\times p$ estimated mixing matrix returned by VCA.

Notations:

- ${\left[\widehat{M}\right]}_{:,j}$ is the jth column of $\widehat{M}$
- ${\left[\widehat{M}\right]}_{:,i:k}$ are the ith through kth columns of $\widehat{M}$
- ${\left[X\right]}_{:,\left[index\right]}$ is the matrix formed from the columns of $X$ listed in index vector $\left[index\right]$
- ${A}^{+}$ denotes the pseudo-inverse of the $p\times p$ matrix $A$ as computed by SVD. ${A}^{+}$ is what you think it is. Write the SVD as $A=U\Sigma {V}^{T}$ where $\Sigma =\text{diag}\left\{{w}_{1},{w}_{2},\dots {w}_{p}\right\}$ is the $p\times p$ diagonal matrix whose diagonal elements are the (ordered) singular values. Then the pseudo-inverse may be written ${A}^{+}=V{\Sigma}^{+}{U}^{T}$ where ${\Sigma}^{+}=\text{diag}\left\{{w}_{1}^{+},{w}_{2}^{+},\dots {w}_{p}^{+}\right\}$ and the pseudo-inverse of a scalar is ${w}^{+}=0$ if $w\simeq 0$ and ${w}^{+}={w}^{-1}$ otherwise. Thus ${A}^{+}={A}^{-1}$ if $A$ is non-singular. If $P=A{A}^{+}$ and $Q={A}^{+}A$, then
- $P$ is the orthogonal projector onto the range of $A$
- $Q$ is the orthogonal projector onto the range of ${A}^{T}$
- $\left(I-P\right)$ is the orthogonal projector onto the kernel (null-space) of ${A}^{T}$
- $\left(I-Q\right)$ is the orthogonal projector onto the kernel (null-space) of $A$
- $P=A{A}^{+}=U\Sigma {V}^{T}\ V{\Sigma}^{+}{U}^{T}=U\Sigma {\Sigma}^{+}{U}^{T}$, and $\Sigma {\Sigma}^{+}=\text{diag}\left\{1,1,\dots 0,\dots \right\}$ has $d$ non-zero diagonal entries, $d\le p$ ($d$ the rank of $A$). $P$ is a full $p\times p$ symmetric matrix, none of whose entries involves the last $p-d$ columns of $U$. If $p=d$ then $P$ and $Q$ are both the identity matrix.

See http://en.wikipedia.org/wiki/Moore-Penrose_pseudoinverse. In particular, the pseudo-inverse of a matrix closely related to one already factored can be computed by applying the Sherman–Morrison–Woodbury formula to update the inverse of the correlation matrix, which may require less work; if the related matrix differs from the original by only a changed, added, or deleted row or column, incremental algorithms exist that exploit the relationship – a fact that might be quite useful here.
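A small numpy illustration of this notation, using numpy's SVD in place of the LAPACK routines discussed in the programming notes below. The rank-deficient example matrix is made up for the demonstration; the assertions check the projector properties of $P=A{A}^{+}$ listed above.

```python
import numpy as np

def pinv_svd(A, tol=1e-10):
    """Moore-Penrose pseudo-inverse via SVD: A+ = V Sigma+ U^T, where the
    reciprocal of any singular value w ~ 0 is replaced by 0."""
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    w_plus = np.array([1.0 / s if s > tol * w[0] else 0.0 for s in w])
    return Vt.T @ np.diag(w_plus) @ U.T

# Rank-deficient 3x3 example: the third column is the sum of the first two.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
A_plus = pinv_svd(A)

P = A @ A_plus        # orthogonal projector onto the range of A
Q = A_plus @ A        # orthogonal projector onto the range of A^T
assert np.allclose(P @ P, P) and np.allclose(P, P.T)   # idempotent, symmetric
assert np.allclose(P @ A, A)                           # identity on range(A)
assert np.allclose(A_plus, np.linalg.pinv(A))          # matches numpy's pinv
```

The tolerance cutoff `tol * w[0]` is the usual relative threshold for deciding $w\simeq 0$; LAPACK's gelss/gesvd-based solvers use the same idea via their rcond parameter.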

- ${U}_{d}$ is an $L\times d$ matrix whose columns are the first $d$ eigenvectors of either $R{R}^{T}$ (which is $L\times L$) or the $L\times L$ spectral covariance matrix $C$. ${U}_{d}$ is formed from the left eigenmatrix computed by SVD, whose columns are ordered (left to right) by decreasing singular value; its first $d$ columns are the most significant (principal) relative to the rest. We usually select $d$ such that the singular values ${w}_{i},\ i>d$, are in some sense “too small to be significant”.

1: Compute SNR

2: if $\left(\text{SNR}>{\text{SNR}}_{\text{th}}\right)$ then

$$\begin{array}{rcll}d& :=& p;& \text{(96)}\\ X& :=& {U}_{d}^{T}R;\quad {U}_{d}\ \text{obtained by SVD}& \text{(97)}\\ u& :=& \text{mean}\left(X\right);\quad u\ \text{is the}\ d\times 1\ \text{sample mean of the columns of}\ X& \text{(98)}\\ {\left[Y\right]}_{:,j}& :=& {\left[X\right]}_{:,j}/\left({\left[X\right]}_{:,j}^{T}u\right);\quad \text{projective projection}& \text{(99)}\end{array}$$

(Steps (96)–(99) reconstructed from [6, Algorithm 1].)

else

$$\begin{array}{rcll}d& :=& p-1;& \text{(100)}\\ {\left[X\right]}_{:,j}& :=& {U}_{d}^{T}\left({\left[R\right]}_{:,j}-\bar{r}\right);\quad {U}_{d}\ \text{obtained by PCA;}\ R\ \text{is}\ L\times N\ \text{and}\ X\ \text{is}\ d\times N& \text{(101)}\\ c& :=& \underset{j=1\dots N}{max}\parallel {\left[X\right]}_{:,j}\parallel ;& \text{(102)}\\ c& :=& \left[c,c,\dots c\right];\quad c\ \text{is a}\ 1\times N\ \text{vector}& \text{(103)}\\ Y& :=& \left[\begin{array}{c}X\\ c\end{array}\right]& \text{(104)}\end{array}$$

Here $\bar{r}=\frac{1}{N}{\sum}_{j=1}^{N}{\left[R\right]}_{:,j}$ is the mean spectral vector of the image, subtracted from every pixel.

end if

14: $A:=\left[0,0,\dots 0\right];\quad$ initialize $A$, a $p\times p$ auxiliary matrix, and its pseudo-inverse ${A}^{+}$. Also compute $\text{Xcor}$, the $d\times d$ spectral correlation matrix of $X$, and $\text{Uxcor}$, its SVD eigenvectors.

15: for ($i=1\ \text{to}\ p$) do $\quad$ (loop body reconstructed from [6, Algorithm 1])

16: $\quad w:=\text{randn}\left(0,{I}_{p}\right);\quad$ a zero-mean random Gaussian vector

17: $\quad f:=\left(I-A{A}^{+}\right)w/\parallel \left(I-A{A}^{+}\right)w\parallel ;\quad f$ is a unit vector orthogonal to the subspace spanned by the columns of $A$

18: $\quad v:={f}^{T}Y;$

19: $\quad k:=arg\underset{j=1\dots N}{max}\left|{\left[v\right]}_{j}\right|;\quad$ the pixel most extreme in direction $f$: a vertex of the projected simplex

20: $\quad {\left[A\right]}_{:,i}:={\left[Y\right]}_{:,k};$

21: $\quad {\left[\text{indice}\right]}_{i}:=k;$

22: end for

Stopping Criteria: if $p$ in step 15 above has been pre-selected, e.g. by HFC as described in section (3.1), we’re done. But if OSP (or similar) is desired, there are other tests that might be applied.

23: if $\left(\text{SNR}>{\text{SNR}}_{\text{th}}\right)$ then

24: $\quad \widehat{M}:={U}_{d}{\left[X\right]}_{:,\left[\text{indice}\right]};\quad \widehat{M}$ is the $L\times p$ estimated mixing matrix.

25: else

26: $\quad \widehat{M}:={U}_{d}{\left[X\right]}_{:,\left[\text{indice}\right]}+\bar{r};\quad \widehat{M}$ is the $L\times p$ estimated mixing matrix.

27: end if
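The steps above can be condensed into a short numpy sketch. This is not the reference implementation: it covers only the high-SNR (SVD) branch of Algorithm 1, the endmember matrix and pure-pixel test data are synthetic, and the `indice` name is kept from the pseudocode.

```python
import numpy as np

def vca(R, p, seed=0):
    """Sketch of VCA (Nascimento & Dias 2005), SNR > SNR_th branch only."""
    rng = np.random.default_rng(seed)
    L, N = R.shape

    # Dimensionality reduction by SVD: d := p, X := Ud^T R  (steps 96-97).
    Ud, _, _ = np.linalg.svd(R @ R.T / N)
    Ud = Ud[:, :p]
    X = Ud.T @ R                                 # p x N

    # Projective projection y_j = x_j / (x_j^T u), u = mean(X)  (steps 98-99).
    u = X.mean(axis=1)
    Y = X / (u @ X)

    indice = np.zeros(p, dtype=int)
    A = np.zeros((p, p))
    A[p - 1, 0] = 1.0                            # so A+ is well defined at i = 0
    for i in range(p):                           # steps 15-22
        w = rng.normal(size=(p, 1))              # random direction
        f = (np.eye(p) - A @ np.linalg.pinv(A)) @ w
        f = f / np.linalg.norm(f)                # orthogonal to vertices found so far
        v = f.T @ Y
        k = int(np.argmax(np.abs(v)))            # most extreme pixel: a vertex
        A[:, i] = Y[:, k]
        indice[i] = k
    return Ud @ X[:, indice], indice             # estimated L x p mixing matrix

# Synthetic check: 3 endmembers, 50 bands, pure pixels planted, no noise.
rng = np.random.default_rng(2)
M = rng.uniform(0.1, 1.0, size=(50, 3))
alpha = rng.dirichlet(np.ones(3), size=500).T
alpha[:, :3] = np.eye(3)                         # pure pixels at columns 0, 1, 2
R = M @ alpha
M_hat, idx = vca(R, 3)
```

With pure pixels present and no noise, the returned `idx` should be a permutation of the planted pure-pixel columns, and `M_hat` should reproduce the corresponding columns of `M` up to that permutation.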

##### 4.1.3 VCA Programming Notes

- There’s a fair bit of SVD here; we’ll use LAPACK routines LAPACKE_sgesvd(..) and friends. See http://www.netlib.org/lapack/lug/node32.html
- Local implementations are in /usr/lib64/liblapacke.so and /usr/include/lapacke/lapacke.h. LAPACKE is the standard C interface to LAPACK, originally contributed by Intel. We might eventually compile a custom version with local compiler optimization switches, but this one will get us going. Be sure to -DHAVE_LAPACK_CONFIG_H and use the #included primitive data types.
- Image comparison: do “man compare” – mathematically and visually annotate the difference between an image and its reconstruction. From ImageMagick; looks pretty sophisticated.

##### 4.1.4 Icarus Image GUI Notes

Needs:

- Opts.write(output_configfile): need to be able to save the Options configuration at any time with a “save options” button. Likewise we will need a “read options” button that is active whenever the user wants to read in previously saved values. A separate “Options Editor” window might be appropriate. (Done)
- Direct hyperspectral image display in false colors. See ColorDisplayforHyperspectralImagery-TGRS-VIS2.pdf in RemoteSensing/doc. (TBD. Current hyperspectral display is true-color only, from visible bands.)
- Overlay on a separate high-res USGS aerial photo if available. (Done)
- Overlay various virt_endmembers (virtual endmembers) in false color. (TBD. Got to get VCA working for images without pure endmembers first.)
- virt_endmember to chem_endmember (chemical endmember) conversion, e.g. by constrained least-squares from selected spectral libraries. (TBD)
- Overlay various chem_endmembers in false color on the original false-color image and/or aerial photo. (TBD)
- Highlight “weak target” pixels. (TBD)
- Wavelength (nm) to Hue: Hue = (650 - wavelength)*240/(650-475);

Adjust this formula for the different spectral ranges you want for true- or false-color images. See http://www.mathworks.com/matlabcentral/answers/17011-color-wave-length-and-hue and http://www.physics.uoguelph.ca/applets/Intro_physics/refraction/LightRefract.java (Partly done. We only specify a fixed visible range at this point, i.e. true color only.)

- HSV to RGB is complicated, but the C code is at http://www.cs.rit.edu/~ncs/color/t_convert.html. It is also in Qt: the QColor class, whose methods also include alpha. We may also implement alpha blending in software; see setOverlayImage.cpp and alphablend.h.
- Qt has convenient conversion methods: QColorMap, QGraphicsScene, QImage, QImageIOHandler, QImageIOPlugin, QImageReader, QImageWriter.
- CIE. What you really want – conversion from a spectral power distribution (SPD) to RGB – is at http://hyperphysics.phy-astr.gsu.edu/hbase/vision/cie.html. Google “CIE color matching functions” and see http://jcgt.org/published/0002/02/01/. XYZ to RGB: http://www.easyrgb.com/index.php?X=MATH&H=01#text1 (Done)
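The wavelength-to-hue formula and the HSV-to-RGB step can be sketched in a few lines of Python; the standard-library colorsys module stands in for the t_convert.html C code or Qt's QColor, and the 475–650 nm endpoints are the ones in the formula above, made into adjustable parameters:

```python
import colorsys

def wavelength_to_hue_deg(wl_nm, wl_min=475.0, wl_max=650.0):
    """Map a visible wavelength (nm) to hue in degrees:
    wl_max (red, 650 nm) -> 0 deg, wl_min (blue, 475 nm) -> 240 deg.
    wl_min/wl_max define the display range and may be changed for
    false-color imagery."""
    return (wl_max - wl_nm) * 240.0 / (wl_max - wl_min)

def wavelength_to_rgb(wl_nm):
    """Full saturation and value; colorsys performs the HSV -> RGB
    conversion.  Returns (r, g, b) floats in [0, 1]."""
    h = wavelength_to_hue_deg(wl_nm) / 360.0     # colorsys wants hue in [0, 1]
    return colorsys.hsv_to_rgb(h, 1.0, 1.0)
```

For example, 650 nm maps to hue 0° (pure red) and 475 nm to hue 240° (pure blue), matching the endpoints of the formula.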

### References

[1] C.-I Chang and Q. Du. Estimation of number of spectrally distinct signal sources in hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing, 42(3):608–619, March 2004.

[2] Chein-I Chang. Hyperspectral Data Processing: Algorithm Design and Analysis. John Wiley & Sons, 111 River St, Hoboken, NJ 07030, 2013.

[3] David W. Coulter. Remote Sensing Analysis of Alteration Mineralogy Associated with Natural Acid Drainage in the Grizzly Peak Caldera, Sawatch Range, Colorado. PhD thesis, Colorado School of Mines, Golden, Colorado, 2006.

[4] A.A. Green, M. Berman, P. Switzer, and M.D. Craig. A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Transactions on Geoscience and Remote Sensing, 26(1):65–74, 1988.

[5] Fred A. Kruse. Comparison of AVIRIS and Hyperion for hyperspectral mineral mapping. http://w.hgimaging.com/PDF/Kruse_JPL2002_AVIRIS_Hyperion.pdf, 2002.

[6] José M. P. Nascimento and José M. Bioucas-Dias. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Transactions on Geoscience and Remote Sensing, 43(4), April 2005.

[7] M.O. Smith, P.E. Johnson, and J.B. Adams. Quantitative determination of mineral types and abundances from reflectance spectra using principal component analysis. Journal of Geophysical Research, 90:797–804, 1985.

[8] Frank D. van der Meer and Steven M. de Jong. Imaging Spectroscopy. Kluwer Academic Publishers, Dordrecht, Boston, London, 2001.

[9] Frank D. van der Meer, Steven M. de Jong, and W. Bakker. Imaging Spectroscopy: Basic analytical techniques, pages 17–62. Kluwer Academic Publishers, Dordrecht, Boston, London, 2001.

[10] R. A. White. Image mean and covariance: http://dbwww.essc.psu.edu/lasdoc/user/covar.html, 2005.

[11] R. A. White. Karhunen-Loeve transformation: http://dbwww.essc.psu.edu/lasdoc/user/karlov.html, 2005.

[12] R. A. White. Search unsupervised training site selection: http://dbwww.essc.psu.edu/lasdoc/user/search.html, 2005.