Target detection performance bounds in compressive imaging
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 205 (2012)
Abstract
This article describes computationally efficient approaches and associated theoretical performance guarantees for the detection of known targets and anomalies from few projection measurements of the underlying signals. The proposed approaches accommodate signals of different strengths contaminated by a colored Gaussian background, and perform detection without reconstructing the underlying signals from the observations. The theoretical performance bounds of the target detector highlight fundamental tradeoffs among the number of measurements collected, the amount of background signal present, the signal-to-noise ratio, and the similarity among potential targets coming from a known dictionary. The anomaly detector is designed to control the number of false discoveries. The proposed approach does not depend on a known sparse representation of targets; rather, the theoretical performance bounds exploit the structure of a known dictionary of targets and the distance preservation property of the measurement matrix. Simulation experiments illustrate the practicality and effectiveness of the proposed approaches.
Introduction
The theory of compressive sensing (CS) has shown that it is possible to accurately reconstruct a sparse signal from few (relative to the signal dimension) projection measurements [1, 2]. Though such a reconstruction is crucial when one wishes to visually inspect the signal, there are many instances where one is solely interested in identifying whether the underlying signal is one of several possible signals of interest. In such situations, a complete reconstruction is computationally expensive and does not optimize the correct performance metric. Recently, CS ideas have been exploited in [3–5] to perform target detection and classification from projection measurements, without reconstructing the underlying signal of interest. In [3, 5], the authors propose nearest-neighbor based methods to classify a signal $f \in \mathbb{R}^N$ to one of $m$ known signals given projection measurements of the form $y = Af + n \in \mathbb{R}^K$ for $K \leq N$, where $A \in \mathbb{R}^{K \times N}$ is a known projection operator and $n \sim \mathcal{N}(0, \sigma^2 I)$ is additive Gaussian noise. This model is simple to analyze but impractical, since in reality a signal is always corrupted by some kind of interference or background noise. Extending the methods in [3, 5] to handle background noise is nontrivial. Though Duarte et al. [4] provide a way to account for background contamination, they make the strong assumption that the signal of interest and the background are sparse in mutually incoherent bases, which might not hold in many applications. Recent works on CS [6, 7] allow the input signal $f$ to be corrupted by pre-measurement noise $b \sim \mathcal{N}(0, \sigma_b^2 I)$, so that one observes $y = A(f + b) + n$, and study reconstruction performance as a function of the number of measurements, the pre- and post-measurement noise statistics, and the dimension of the input signal.
In this work, however, we are interested in performing target detection without an intermediate reconstruction step. Furthermore, the increased utility of high-dimensional imaging techniques such as spectral imaging or videography in applications like remote sensing, biomedical imaging, and astronomical imaging [8–15] necessitates the extension of compressive target detection ideas to such imaging modalities to achieve reliable target detection from few measurements relative to the ambient signal dimensions.
For example, recent advances in CS have led to the development of new spectral imaging platforms which attempt to address challenges in conventional imaging platforms related to system size, resolution, and noise by acquiring fewer compressive measurements than spatiospectral voxels [16–21]. However, these system designs have a number of degrees of freedom which influence subsequent data analysis. For instance, the single-shot compressive spectral imager discussed in [18] collects one coded projection of each spectrum in the scene. One projection per spectrum is sufficient for reconstructing spatially homogeneous spectral images, since projections of neighboring locations can be combined to infer each spectrum. Significantly more projections are required for detecting targets of unknown strengths without the benefit of spatial homogeneity. We are interested in investigating how several such systems can be used in parallel to reliably detect spectral targets and anomalies from different coded projections.
In general, we consider a broadly applicable framework that allows us to account for background and sensor noise, and perform target detection directly from projection measurements of signals obtained at different spatial or temporal locations. The precise problem formulation is provided below.
Problem formulation
Let us assume access to a dictionary of possible targets of interest $\mathcal{D} = \{f^{(1)}, f^{(2)}, \dots, f^{(m)}\}$, where each $f^{(j)} \in \mathbb{R}^N$, $j = 1, \dots, m$, is unit-norm. Our measurements are of the form
$$z_i = \Phi\left(\alpha_i f_i^* + b_i\right) + w_i, \qquad (1)$$
where

i∈{1,…,M} indexes the spatial or temporal locations at which data are collected;

$\alpha_i \geq 0$ is a measure of the signal-to-noise ratio at location $i$, which is either known or estimated from observations;

$\Phi \in \mathbb{R}^{K \times N}$, for $K < N$, is a measurement matrix to be specified in Section “Whitening compressive observations”;
$b_i \in \mathbb{R}^N \sim \mathcal{N}(\mu_b, \Sigma_b)$ is the background noise vector, and $w_i \in \mathbb{R}^K \sim \mathcal{N}(0, \sigma^2 I)$ is the i.i.d. sensor noise.
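As a concrete illustration, the measurement model above can be simulated as follows. This is a minimal sketch: the Gaussian $\Phi$, the dictionary, and all dimensions and noise levels are placeholder choices for illustration, not the specific construction analyzed later in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M, m = 128, 32, 200, 5      # signal dim, measurements, locations, dictionary size
sigma, sigma_b = 0.1, 0.05        # sensor and background noise scales (illustrative)

# Dictionary of unit-norm targets f^(1), ..., f^(m)
D = rng.standard_normal((m, N))
D /= np.linalg.norm(D, axis=1, keepdims=True)

Phi = rng.standard_normal((K, N)) / np.sqrt(K)   # placeholder measurement matrix
Sigma_b = sigma_b**2 * np.eye(N)                 # background covariance (white here for simplicity)

labels = rng.integers(0, m, size=M)              # true class of each location
alpha = rng.uniform(2.0, 4.0, size=M)            # per-location SNR alpha_i >= 0

# z_i = Phi(alpha_i f_i* + b_i) + w_i
b = rng.multivariate_normal(np.zeros(N), Sigma_b, size=M)
w = sigma * rng.standard_normal((M, K))
Z = (alpha[:, None] * D[labels] + b) @ Phi.T + w
print(Z.shape)  # (200, 32): one K-dimensional projection per location
```

Each row of `Z` is one compressive observation $z_i$; the detection methods below operate on these rows directly, never on reconstructions of the $f_i^*$.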
For example, in the case of spectral imaging $f_i^*$ represents the spectrum at the $i$th spatial location, and in video sequences $f_i^*$ represents the vectorized image frame obtained at the $i$th time interval. In this article we consider the following target detection problems:

(1)
Dictionary signal detection (DSD): Here we assume that each $f_i^* \in \mathcal{D}$ for $i \in \{1, \dots, M\}$, and our task is to detect all instances of one target signal $f^{(j)} \in \mathcal{D}$ for some unknown $j \in \{1, \dots, m\}$, i.e., to locate $S = \{i : f_i^* = f^{(j)}\}$. DSD is useful in contexts in which we know the makeup of a scene and wish to focus our attention on the locations of a particular signal. For instance, in spectral imaging, DSD is used to study a scene of interest by classifying every spectrum in the scene into different known classes [11, 22]. In a video setup, DSD could be used to classify video segments into one of several categories (such as news, weather, sports, etc.) by projecting the video sequence onto an appropriate feature space and comparing the feature vectors to those in a known dictionary [23].

(2)
Anomalous signal detection (ASD): Here, our task is to detect all signals which are not members of our dictionary, i.e., to detect $S = \{i : f_i^* \notin \mathcal{D}\}$ (this is akin to anomaly detection methods in the literature which are based on nominal, non-anomalous training samples [24, 25]). For instance, ASD may be used when we know most components of a spectral image and wish to identify all spectra which deviate from this model [26].
Our goal is to accurately perform DSD or ASD without reconstructing the spectral input $f_i^*$ from $z_i$ for $i \in \{1, \dots, M\}$. Accounting for background is a crucial issue. Typically, the background corresponding to the scene of interest and the sensor noise are modeled together by a colored multivariate Gaussian distribution [27]. However, in our case, it is important to distinguish the two because of the presence of the projection operator $\Phi$: the projection operator acts upon the background spectrum in the same way as on the target spectrum, but it does not affect the sensor noise. We assume that $b_i$ and $w_i$ are independent of each other, and that the prior probabilities of the different targets in the dictionary, $p^{(j)} = \mathbb{P}(f_i^* = f^{(j)})$ for $j \in \{1, \dots, m\}$, are known in advance. If these probabilities are unknown, the targets can be considered equally likely. Given this setup, our goal is to develop suitable target and anomaly detection approaches, and to provide theoretical guarantees on their performance.
In this article, we develop detection performance bounds which show how performance scales with the number of detectors in a compressive setting as a function of SNR, the similarity between potential targets in a known dictionary, and their prior probabilities. Our bounds are based on a detection strategy which operates directly on the collected data, as opposed to first reconstructing each $f_i^*$ and then performing detection on the estimated signals. Reconstruction as an intermediate step in detection may be appealing to end users who wish to visually inspect spectral images instead of relying entirely on an automatic detection algorithm. However, using this intermediate step has two potential pitfalls. First, the Rao–Blackwell theorem [28] tells us that an optimal detection algorithm operating on the processed data (i.e., not sufficient statistics) cannot perform better than an optimal detection algorithm operating on the raw data. In other words, optimal performance is possible on the raw data, but we have no such performance guarantee for the reconstructed signals. Second, the relationship between reconstruction errors and detection performance is not well understood in many settings. Although we do not reconstruct the underlying signals, our performance bounds are intimately related to the signal resolution needed to achieve the signal diversity present in our dictionary. Since we have many fewer observations than signal dimensions at this resolution, we adopt the “compressive” terminology.
Performance metric
To assess the performance of our detection strategies, we consider the false discovery rate (FDR) metric and related quantities developed for multiple hypothesis testing problems [29]. Since we collect $M$ independent observations of potentially different signals, we are simultaneously conducting $M$ hypothesis tests when we search for targets. Unlike the probability of false alarm, which measures the probability of falsely declaring a target for a single test, the FDR measures the fraction of declared targets that are false alarms; that is, it provides information about the entire set of $M$ hypotheses instead of just one. More formally, the FDR is given by
$$\mathrm{FDR} = \mathbb{E}\left[\frac{V}{\max(R, 1)}\right],$$
where $V$ is the number of falsely rejected null hypotheses, and $R$ is the total number of rejected null hypotheses. Controlling the FDR in a multiple hypothesis testing framework is akin to designing a constant false alarm rate (CFAR) detector in spectral target detection applications, which keeps the false alarm rate at a desired level irrespective of the background interference and sensor noise statistics [22].
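For intuition, the quantity inside the expectation (the false discovery proportion) can be computed from a set of simultaneous decisions as follows; this is a minimal sketch with hypothetical decision vectors, not part of the article's detectors.

```python
import numpy as np

def false_discovery_proportion(rejected, null_is_true):
    """V / max(R, 1): the fraction of rejections that rejected a true null.

    rejected, null_is_true: boolean arrays over the M simultaneous tests.
    The FDR is the expectation of this proportion over noise realizations.
    """
    rejected = np.asarray(rejected, dtype=bool)
    null_is_true = np.asarray(null_is_true, dtype=bool)
    R = rejected.sum()                   # total rejected null hypotheses
    V = (rejected & null_is_true).sum()  # falsely rejected null hypotheses
    return V / max(R, 1)

# Toy example: 6 tests, 4 rejections, 1 of which rejected a true null
rejected     = np.array([1, 1, 1, 1, 0, 0], dtype=bool)
null_is_true = np.array([1, 0, 0, 0, 1, 1], dtype=bool)
print(false_discovery_proportion(rejected, null_is_true))  # 0.25
```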
Previous investigations
Much of the classical target detection literature [30–34] assumes that each target lies in a $P$-dimensional subspace of $\mathbb{R}^N$ for $P < N$. The subspace in which the target lies is often assumed to be known or specified by the user, and the variability of the background is modeled using a probability distribution. Given knowledge of the target subspace, background statistics, and sensor noise statistics, detection methods based on LRTs (likelihood ratio tests) and GLRTs (generalized likelihood ratio tests) have been proposed in [30–35]. A subspace model is optimal if the subspace in which targets lie is known in advance. However, in many applications, such subspaces might be hard to characterize. An alternative, more flexible option is to assume that the high-dimensional target exhibits some low-dimensional structure that can be exploited to perform efficient target detection. This approach is utilized in this work and in [5], where the target signal in $\mathbb{R}^N$ is assumed to come from a dictionary of $m$ known signals such that $m \ll N$, and in [3], where the targets are assumed to lie on a low-dimensional manifold embedded in the high-dimensional target space.
Recently, several methods for target or anomaly detection that rely on recovering the full spatiospectral data from projection measurements [36, 37] have been proposed. However, they are computationally intensive, and the detection performance associated with these reconstructions is unknown. Other researchers have exploited CS to perform target detection and classification without reconstructing the underlying signal [3–5]. Duarte et al. [4] propose a matching pursuit based algorithm, called the incoherent detection and estimation algorithm (IDEA), to detect the presence of a signal of interest against a strong interfering signal from noisy projection measurements. The algorithm is shown to perform well on experimental data sets under some strong assumptions on the sparsity of the signal of interest and the interfering signal. Davenport et al. [3] develop a classification algorithm called the smashed filter to classify an image in $\mathbb{R}^N$ into one of $m$ known classes from $K$ projections of the signal, where $K < N$. The underlying image is assumed to lie on a low-dimensional manifold, and the algorithm finds the closest match from the $m$ known classes by performing a nearest-neighbor search over the $m$ different manifolds. The projection measurements are chosen to preserve the distances among the manifolds. Though Davenport et al. [3] offer theoretical bounds on the number of measurements necessary to preserve distances among different manifolds, it is not clear how the performance scales with $K$ or how to incorporate background models into this setup. Moreover, this approach may be computationally intensive, since it involves learning and searching over different manifolds. Haupt et al. [5] use a nearest-neighbor classifier to classify an $N$-dimensional signal into one of $m$ equally likely target classes based on $K < N$ random projections, and provide theoretical guarantees on detector performance.
While the method discussed in [5] is computationally efficient, it is nontrivial to extend to the case of target detection with colored background noise and non-equiprobable targets. Furthermore, their performance guarantees cannot be directly extended to our problem, since we focus on error measures that let us analyze the performance of multiple hypothesis tests simultaneously, as opposed to the above methods, which consider compressive classification performance for a single hypothesis test.
The authors of a more recent work [38] extend the classical RX anomaly detector [39] to directly detect anomalies from random, orthonormal projection measurements without an intermediate reconstruction step. They numerically show how the detection probability improves as a function of the signal-to-noise ratio as the number of measurements changes. Though the probability of detection is a good performance measure, in many applications controlling the false discoveries below a desired level is more crucial. As a result, in our work, we propose an anomaly detection method that controls the FDR below a desired level.
Contributions
This article makes the following contributions to the above literature:

A compressive target detection approach, which (a) is computationally efficient, (b) allows for the signal strengths of the targets to vary with spatial location, (c) allows for backgrounds mixed with potential targets, (d) considers targets with different a priori probabilities, and (e) yields theoretical guarantees on detector performance. This article unifies preliminary work by the authors [40, 41], presents previously unpublished aspects of the proofs, and contains updated experimental results.

A computationally efficient anomaly detection method that detects anomalies of different strengths from projection measurements and also controls the FDR at a desired level.

A whitening filter approach to compressive measurements of signals with background contamination, and associated analysis leading to bounds on the amount of background to which our detection procedure is robust.
The above theoretical results, which are the main focus of this article, are supported with simulation studies in Section “Experimental results”. Classical detection methods described in [22, 26, 27, 30–35, 39, 42–45] do not establish performance bounds as a function of signal resolution or target dictionary properties, and rely on relatively direct observation models which we show to be suboptimal when the detector size is limited. The methods in [3, 4] do not contain performance analysis, and our analysis builds upon the analysis in [5] to account for several specific aspects of the compressive target detection problem.
Whitening compressive observations
Before we present our detection methods for DSD and ASD problems, respectively, we briefly discuss a whitening step that is common to both our problems of interest.
Let us suppose that there are enough background training data available to estimate the background mean $\mu_b$ and covariance matrix $\Sigma_b$. We can assume without loss of generality that $\mu_b = 0$, since $\Phi \mu_b$ can be subtracted from the observations. Given knowledge of the background statistics, we can transform the background and sensor noise model $\Phi b_i + w_i \sim \mathcal{N}(0, \Phi \Sigma_b \Phi^T + \sigma^2 I)$ in (1) into a simple white Gaussian noise model by multiplying the observations $z_i$, $i \in \{1, \dots, M\}$, by the whitening filter $C_\Phi \triangleq (\Phi \Sigma_b \Phi^T + \sigma^2 I)^{-1/2}$. This whitening transformation reduces the observation model in (1) to
$$y_i = C_\Phi z_i = \alpha_i A f_i^* + n_i, \qquad (2)$$
where
$$A = C_\Phi \Phi \qquad (3)$$
and $n_i = C_\Phi(\Phi b_i + w_i) \sim \mathcal{N}(0, I)$. To verify that $n_i \sim \mathcal{N}(0, I)$, observe that
$$\mathrm{Cov}(n_i) = C_\Phi \left(\Phi \Sigma_b \Phi^T + \sigma^2 I\right) C_\Phi^T = I.$$
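Numerically, the whitening step can be sketched as follows; the $\Phi$, background covariance, and dimensions below are placeholder choices for illustration.

```python
import numpy as np

def inv_sqrtm(S):
    """Inverse square root of a symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(S)
    return (vecs * vals**-0.5) @ vecs.T

rng = np.random.default_rng(1)
N, K, sigma = 64, 16, 0.1
Phi = rng.standard_normal((K, N)) / np.sqrt(K)   # placeholder measurement matrix
Sigma_b = 0.02 * np.eye(N)                       # assumed background covariance

# Whitening filter C_Phi = (Phi Sigma_b Phi^T + sigma^2 I)^{-1/2}
S = Phi @ Sigma_b @ Phi.T + sigma**2 * np.eye(K)
C_Phi = inv_sqrtm(S)

# The whitened noise n_i = C_Phi (Phi b_i + w_i) has identity covariance:
cov_n = C_Phi @ S @ C_Phi.T
print(np.allclose(cov_n, np.eye(K)))  # True
```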
We can now choose Φ so that the corresponding A has certain desirable properties as detailed in Sections “Dictionary signal detection” and “Anomalous signal detection”.
For a given A, the following theorem provides a construction of Φ that satisfies (3) and a bound on the maximum tolerable background contamination:
Theorem 1
Let $B = I - A \Sigma_b A^T$. If the largest eigenvalue $\lambda_{\max}$ of $\Sigma_b$ satisfies
$$\lambda_{\max} < \frac{1}{\|A\|^2},$$
where ∥A∥ is the spectral norm of A, then B is positive definite and Φ=σB^{−1/2}A is a sensing matrix, which can be used in conjunction with a whitening filter to produce observations modeled in (2).
The proof of this theorem is provided in Appendix 1. This theorem draws an interesting relationship between the maximum background perturbation that the system can tolerate and the spectral norm of the measurement matrix, which in turn varies with $K$ and $N$. Hardware designs such as those in [17, 19] use spatial light modulators and digital micromirror devices, which allow the measurement matrix $\Phi$ to be adjusted easily in response to changing background statistics and other operating conditions.
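A numerical sketch of the Theorem 1 construction under an illustrative isotropic background (the dimensions, noise level, and eigenvalue margin are placeholder choices). The final check confirms that whitening the constructed $\Phi$ recovers the desired sensing matrix $A$:

```python
import numpy as np

def inv_sqrtm(S):
    """Inverse square root of a symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(S)
    return (vecs * vals**-0.5) @ vecs.T

rng = np.random.default_rng(2)
N, K, sigma = 64, 16, 0.1
A = rng.standard_normal((K, N)) / np.sqrt(K)   # candidate sensing matrix

# Background weak enough that lambda_max(Sigma_b) < 1 / ||A||^2
lam = 0.5 / np.linalg.norm(A, 2)**2
Sigma_b = lam * np.eye(N)

B = np.eye(K) - A @ Sigma_b @ A.T
assert np.all(np.linalg.eigvalsh(B) > 0)       # B is positive definite

Phi = sigma * inv_sqrtm(B) @ A                 # Phi = sigma B^{-1/2} A

# Applying the whitening filter to this Phi recovers A exactly:
C_Phi = inv_sqrtm(Phi @ Sigma_b @ Phi.T + sigma**2 * np.eye(K))
print(np.allclose(C_Phi @ Phi, A))  # True
```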
In the sections that follow, we consider collecting measurements of the form $y_i = \alpha_i A f_i^* + n_i$ given in (2), where $f_i^*$ is the target of interest for $i = 1, \dots, M$, and $A \in \mathbb{R}^{K \times N}$ is a sensing matrix that satisfies (3). It is assumed that any background contamination has been eliminated with the whitening procedure described in this section.
Dictionary signal detection
Suppose that the end user wants to test for the presence of one known target versus the rest, but it is not known a priori which target from $\mathcal{D}$ the user wants to detect. In this case, we cast the DSD problem as a multiple hypothesis testing problem of the form
$$\mathcal{H}_{0i}^{(j)}: f_i^* = f^{(j)} \quad \text{versus} \quad \mathcal{H}_{1i}^{(j)}: f_i^* \neq f^{(j)}, \qquad (5)$$
where $f^{(j)} \in \mathcal{D}$ is the target of interest and $i = 1, \dots, M$.
Decision rule
We define our decision rule corresponding to target $f^{(j)} \in \mathcal{D}$ in terms of a set of significance regions $\Gamma_i^{(j)}$, such that one rejects the $i$th null hypothesis if its test statistic $y_i$ falls in the $i$th significance region. Specifically, $\Gamma_i^{(j)}$ is defined according to
$$\Gamma_i^{(j)} = \left\{ y : \log \mathbb{P}\left(f_i^* = f^{(j)} \mid y, \alpha_i, A\right) < \max_{\ell \neq j} \log \mathbb{P}\left(f_i^* = f^{(\ell)} \mid y, \alpha_i, A\right) \right\}, \qquad (6)$$
where
$$\log \mathbb{P}\left(f_i^* = f^{(j)} \mid y_i, \alpha_i, A\right) = \frac{K}{2} \log\left(\frac{1}{2\pi}\right) - \frac{\left\|y_i - \alpha_i A f^{(j)}\right\|^2}{2} + \log p^{(j)}$$
is the logarithm of the a posteriori probability density of the target $f^{(j)}$ at the $i$th spatial location given the observations $y_i$, the signal-to-noise ratio $\alpha_i$, and the sensing matrix $A$, and $p^{(j)}$ is the a priori probability of target class $j$. Note that the process of determining these decision regions involves a sequence of nearest-neighbor calculations, so the computational complexity scales with the number of classes $m$. In this work, we operate under the assumption that $m$ is much smaller than the dimensionality of the datasets we consider. For example, if we consider spectral images, then the number of objects (signal classes) that make up a scene of interest is often smaller than the number of voxels in the image. This assumption is not unrealistic and has been exploited in earlier work such as [22] and the references therein. In most of the prior work we have surveyed [46, 47], the number of signal classes is less than 35, which keeps our approach tractable.
The decision rule can be formally expressed in terms of the significance regions as follows:
$$\text{declare } \mathcal{H}_{1i}^{(j)} \text{ if } y_i \in \Gamma_i^{(j)}, \text{ and declare } \mathcal{H}_{0i}^{(j)} \text{ otherwise.} \qquad (7)$$
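The decision regions above amount to a MAP nearest-neighbor rule: score each dictionary target by its log a posteriori probability and reject the null for target $j$ whenever some other target scores higher. A minimal sketch follows; the dictionary, dimensions, and SNR are illustrative choices.

```python
import numpy as np

def map_class(y, alpha_i, A, dictionary, priors):
    """Index of the dictionary target maximizing the log a posteriori score
    -||y - alpha_i A f^(j)||^2 / 2 + log p^(j) (constant terms dropped)."""
    scores = [-0.5 * np.linalg.norm(y - alpha_i * A @ f)**2 + np.log(p)
              for f, p in zip(dictionary, priors)]
    return int(np.argmax(scores))

rng = np.random.default_rng(3)
N, K, m = 64, 24, 4
D = rng.standard_normal((m, N))
D /= np.linalg.norm(D, axis=1, keepdims=True)
A = rng.standard_normal((K, N)) / np.sqrt(K)
priors = np.full(m, 1.0 / m)

alpha_i, j = 8.0, 2
y = alpha_i * A @ D[j] + rng.standard_normal(K)   # observation from class 2

# The test for target j rejects the null iff the MAP class differs from j:
reject_null = map_class(y, alpha_i, A, D, priors) != j
print(reject_null)  # False with high probability at this SNR
```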
We analyze this detector by extending the positive FDR (pFDR) error measure introduced by Storey to characterize the errors encountered in performing multiple, independent, and non-identical hypothesis tests simultaneously [48]. The pFDR, discussed formally below, is the fraction of falsely rejected null hypotheses among the total number of rejected null hypotheses, subject to the positivity condition that one rejects at least one null hypothesis. The pFDR is similar to the FDR except that the positivity condition is enforced. In our context, the positivity condition means that we declare at least one signal to be a non-target, which in turn implies that the scene of interest is composed of more than one object in the case of spectral imaging, or that the scene is not static in the case of video imaging.
Consider a collection of significance regions $\Gamma = \{\Gamma_i^{(j)} : i = 1, \dots, M\}$, such that one declares $\mathcal{H}_{1i}^{(j)}$ if the test statistic $y_i \in \Gamma_i^{(j)}$. The pFDR for multiple, non-identical hypothesis tests can be defined in terms of the significance regions as follows:
$$\mathrm{pFDR}^{(j)}(\Gamma) = \mathbb{E}\left[\left.\frac{V}{R} \,\right|\, R > 0\right], \qquad (8)$$
where
$$V = \sum_{i=1}^{M} \mathbb{1}_{\left\{y_i \in \Gamma_i^{(j)},\; f_i^* = f^{(j)}\right\}}$$
is the number of falsely rejected null hypotheses,
$$R = \sum_{i=1}^{M} \mathbb{1}_{\left\{y_i \in \Gamma_i^{(j)}\right\}}$$
is the total number of rejected null hypotheses, and $\mathbb{1}_{\{E\}} = 1$ if event $E$ is true and 0 otherwise. In our setup, the pFDR corresponds to the expected ratio of the number of missed targets to the number of signals declared to be non-targets, subject to the condition that at least one signal is declared to be a non-target (note that this ratio is traditionally referred to as the positive false non-discovery rate (pFNR), but is technically the pFDR in this context because of our definitions of the null and alternative hypotheses). The theorem below presents our main result:
Theorem 2
Given observations of the form (2), if one performs multiple, independent, non-identical hypothesis tests of the form (5) and decides according to (7), then the worst-case pFDR, given by $\mathrm{pFDR}_{\max} = \max_{j \in \{1, \dots, m\}} \mathrm{pFDR}^{(j)}(\Gamma)$, satisfies the following bound:
where
The proof of this theorem is detailed in Appendix 2. A key element of our proof is the adaptation of the techniques from [48] to non-identical, independent hypothesis tests.
An achievable bound on the worstcase pFDR
Theorem 2 in the preceding section shows that, for a given $A$, the worst-case pFDR is bounded from above by a function of the worst-case misclassification probability. In this section, we use this theorem to establish an achievable bound on the worst-case pFDR that explicitly depends on the number of observations $K$, the signal strengths $\{\alpha_i\}_{i=1}^M$, the similarity among different targets of interest, and the a priori target probabilities.
Let us first define the quantities
$$p_{\min} = \min_{j} p^{(j)}, \qquad p_{\max} = \max_{j} p^{(j)}, \qquad \alpha_{\min} = \min_{i} \alpha_i, \qquad d_{\min} = \min_{j \neq \ell} \left\| f^{(j)} - f^{(\ell)} \right\|.$$
Then we have the following theorem, whose proof is given in Appendix 3:
Theorem 3
Let λ_{max} denote the largest eigenvalue of Σ_{b}. For a given 0 < ε < 1−p_{max}, assume that K and N are sufficiently large so that the following conditions hold:
Then there exists a K×N sensing matrix A that satisfies the condition of Theorem 1, and for which
This result has the following implications and consequences:

(1)
For a given $N$, the upper bound (13b) on $\lambda_{\max}$ increases as $K$ increases, which implies that the system can tolerate more background perturbation if we collect more measurements.

(2)
The pFDR bound (14) decays as the values of $K$, $d_{\min}$, and $\alpha_{\min}$ increase, and grows as $p_{\min}$ decreases. For fixed $p_{\max}$, $p_{\min}$, $\alpha_{\min}$, and $d_{\min}$, the bound in (14) enables one to choose a value of $K$ to guarantee a desired pFDR value.

(3)
The dominant part of the bound (14) is independent of $N$, and is only a function of $K$, $p_{\max}$, $p_{\min}$, $\alpha_{\min}$, and $d_{\min}$. The lack of dependence on $N$ is not unexpected. Indeed, when we are interested in preserving pairwise distances among the members of a fixed dictionary of size $m$, the Johnson–Lindenstrauss lemma [49] says that, with high probability, $K = \mathcal{O}(\log m)$ random Gaussian projections suffice, regardless of the ambient dimension $N$. This is precisely the regime we are working in here.

(4)
The bound on $K$ given in (13c) increases logarithmically with the difference between $p_{\max}$ and $p_{\min}$. This is to be expected, since one would need more measurements to detect a less probable target, as our decision rule weights each target by its a priori probability. If all targets are equally likely, then $p_{\max} = p_{\min} = 1/m$, and $K = \mathcal{O}(\log m)$ is sufficient provided $\alpha_{\min}^2 d_{\min}^2$ is sufficiently large that
$$\log\left(1 + \frac{\alpha_{\min}^2 d_{\min}^2}{4K}\right) > \log\left(1 + \frac{\alpha_{\min}^2 d_{\min}^2}{4N}\right) > 1$$
(where the first inequality holds since $K < N$). In addition, the lower bound on $K$ also illustrates the interplay between the signal strength of the targets, the similarity among different targets in $\mathcal{D}$, and the number of measurements collected. A small value of $d_{\min}$ suggests that the targets in $\mathcal{D}$ are very similar to each other, and thus $\alpha_{\min}$ and $K$ need to be high enough so that similar targets can still be distinguished. The experimental results discussed in Section “Experimental results” illustrate the tightness of the theoretical results discussed here.
Inspection of the proof shows that if A is generated according to a Gaussian distribution, then the conditions of Theorem 3 will be met with high probability.
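This scaling can be checked empirically: with $A$ drawn i.i.d. Gaussian (entries scaled by $1/\sqrt{K}$), the worst-case relative distortion of pairwise dictionary distances shrinks as $K$ grows and is insensitive to the ambient dimension $N$. The dimensions below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
N, m = 4096, 20
D = rng.standard_normal((m, N))
D /= np.linalg.norm(D, axis=1, keepdims=True)   # m unit-norm dictionary signals

def worst_distortion(K):
    """Worst relative change in pairwise distance under a Gaussian projection."""
    A = rng.standard_normal((K, N)) / np.sqrt(K)   # so that E||Ax||^2 = ||x||^2
    worst = 0.0
    for j in range(m):
        for l in range(j + 1, m):
            d_true = np.linalg.norm(D[j] - D[l])
            d_proj = np.linalg.norm(A @ (D[j] - D[l]))
            worst = max(worst, abs(d_proj - d_true) / d_true)
    return worst

# Distortion decreases with K, with no dependence on N = 4096:
for K in (16, 64, 256):
    print(K, round(worst_distortion(K), 3))
```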
Extension to a manifoldbased target detection framework
The DSD problem formulation in Section “Problem formulation” is accurate if the signals in the dictionary are faithful representations of the target signals that we observe. In reality, however, the target signals will differ from the dictionary signals owing to differences in the experimental conditions under which they are collected. For instance, in spectral imaging applications, the observed spectrum of any material will not match the reference spectrum of the same material observed in a laboratory, because of differences in atmospheric and illumination conditions. To overcome this problem, one could form a large dictionary to account for such uncertainties in the target signals and perform target detection according to the approaches discussed in Sections “Whitening compressive observations” and “Dictionary signal detection”. A potential drawback of this approach is that our theoretical performance bound grows with the size of $\mathcal{D}$ through $p_{\min}$ and $d_{\min}$. Instead, one could reasonably model the target signals observed under different experimental conditions as lying in a low-dimensional submanifold of the high-dimensional ambient signal space, as shown to be true for spectral images in [50]. We can exploit this result to extend our analysis to a much broader framework that accounts for uncertainties in our dictionary.
Let us consider a dictionary of manifolds $\mathcal{D}_{\mathcal{M}} = \{\mathcal{M}^{(1)}, \dots, \mathcal{M}^{(m)}\}$ corresponding to $m$ different target classes, and suppose that $f_i^*$ for $i \in \{1, \dots, M\}$ lies in one of the manifolds in $\mathcal{D}_{\mathcal{M}}$. Considering an observation model of the form given in (2), our goal is to determine $\{i : f_i^* \in \mathcal{M}^{(j)}\}$, where $j \in \{1, \dots, m\}$ is the target class of interest. Let us assume that all target classes are equally likely to keep the presentation simple, though the analysis extends to the case where the target classes have different a priori probabilities. Suppose that we collect independent sets of measurements $\{y_i\}_{i=1}^M$ and $\{\tilde{y}_i\}_{i=1}^M$. Then, we can use the following two-step procedure to extend our DSD method to this manifold-based framework:

(1)
Given $\{y_i\}$, form a data-dependent dictionary $\mathcal{D}_{y_i} = \{\tilde{f}_i^{(1)}, \dots, \tilde{f}_i^{(m)}\}$ corresponding to each $y_i$ by finding its nearest neighbor in each manifold:
$$\tilde{f}_i^{(\ell)} = \arg\max_{f \in \mathcal{M}^{(\ell)}} \mathbb{P}\left(y_i \mid f_i^* = f, \alpha_i, A\right)$$
for $\ell \in \{1, \dots, m\}$ and $i = 1, \dots, M$.

(2)
Given $\{\tilde{y}_i\}$ and the corresponding $\{\mathcal{D}_{y_i}\}$, find
$$\hat{f}_i = \arg\max_{\tilde{f} \in \mathcal{D}_{y_i}} \mathbb{P}\left(\tilde{y}_i \mid f_i^* = \tilde{f}, \alpha_i, A\right)$$
and declare that the $i$th observed spectrum corresponds to class $j$ if $\hat{f}_i = \tilde{f}_i^{(j)}$.
This twostep procedure is studied in[3] for the case\left\{{\mathit{y}}_{i}\right\}=\left\{{\stackrel{~}{\mathit{y}}}_{i}\right\} where the authors provide bounds on the number of projection measurements needed to preserve distances among manifolds. However, they do not offer associated target detection performance guarantees. Our analysis and the theoretical performance bounds extend directly to this framework, if we collect two sets of observations as discussed above. Specifically, the hypothesis tests corresponding to the second step can be written as
where{\stackrel{~}{\mathit{f}}}_{i}^{\left(j\right)}\in {\mathcal{D}}_{{\mathit{y}}_{i}} for i=1,…,M. Since the dictionary in this case changes with i, these tests are non-identical. This is another instance where our extension of pFDR-based analysis to simultaneous testing of multiple, independent, and non-identical hypothesis tests (8) is very significant. Following the proof techniques discussed in the appendix, we can straightforwardly show that the bound in (14) holds in this manifold setting with p_{min}=p_{max}=1/m, since all target classes are assumed to be equally likely here, and d_{min}=min_{i∈{1,…,M}}d_{i}, where
Anomalous signal detection
The target detection approach discussed above assumes that the target signal of interest resides in a dictionary that is available to the user. However, in some applications (such as military applications and surveillance), one might be interested in detecting objects not in the dictionary. In other words, the target signals of interest are anomalous and are not available to the user. In this section, we show how the target detection methods discussed above can be extended to anomaly detection. In particular, we exploit the distance preservation property of the sensing matrix A to detect anomalous targets from projection measurements.
ASD problem formulation
Given observations of the form in (2), we are interested in detecting whether{\mathit{f}}^{\ast}\in \mathcal{D} or f^{∗} is anomalous. Let us write the anomaly detection problem as the following multiple hypothesis test:
where\tau \in \left[0,\sqrt{2}\right) is a user-defined threshold that encapsulates our uncertainty about the accuracy with which we know the dictionary.^{a} In particular, τ controls how different a signal needs to be from every dictionary element to truly be considered anomalous. In the absence of any prior knowledge about the targets of interest, τ can simply be set to zero. The null hypothesis in this setting models normal behavior, while the alternative hypothesis models abnormal or anomalous behavior. This formulation is consistent with the literature[26, 38].
Note that the definition of the hypotheses given in (15a) and (15b) matches the definition in (5) for the special case where the dictionary contains just one signal. In this special case, the signal input f^{∗} is in the dictionary under the null hypothesis in both DSD and ASD problem formulations.^{b}
Anomaly detection approach
Our anomaly detection approach and the associated theoretical analysis are based on a “distance preservation” property of A, which is stated formally in (18). We propose an anomaly detection method that controls the FDR below a desired level δ for different background and sensor noise statistics. In other words, we control the expected ratio of falsely declared anomalies to the total number of signals declared anomalous. Note that here we work with the FDR as opposed to the pFDR, since it is possible for a scene to contain no anomalies at all. We let V/R=0 for R=V=0, since one does not declare any signal to be anomalous in this case. In[29], Benjamini and Hochberg discuss a p-value based procedure, the “BH procedure”, that controls the FDR of M independent hypothesis tests below a desired level. Let
be the test statistic at the ith location. The p-value can be defined in terms of our test statistic as follows:
where{\stackrel{~}{d}}_{i}={min}_{\mathit{f}\in \mathcal{D}}\parallel {\alpha}_{i}\mathit{A}\left({\mathit{f}}_{i}^{\ast}-\mathit{f}\right)+\mathit{n}\parallel and\mathit{n}\sim \mathcal{N}\left(0,\mathit{I}\right) is independent of n_{i}. This is the probability, under the null hypothesis, of acquiring a test statistic at least as extreme as the one observed. Let us denote the ordered set of p-values by p_{(1)}≤p_{(2)}≤⋯≤p_{(M)} and let{\mathcal{\mathscr{H}}}_{\left(0i\right)} be the null hypothesis corresponding to the (i)th p-value. The BH procedure says that if we reject all{\mathcal{\mathscr{H}}}_{\left(0i\right)} for i=1,…,t, where t is the largest i for which p_{(i)}≤iδ/M, then the FDR is controlled at δ.
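As a concrete reference, the BH procedure just described can be implemented in a few lines; this sketch assumes the p-values (or their upper bounds from Theorem 4) have already been computed.

```python
import numpy as np

def benjamini_hochberg(p_values, delta):
    """Reject H_(0i) for i = 1, ..., t, where t is the largest index
    with p_(i) <= i * delta / M; for independent tests this controls
    the FDR at level delta. Returns a boolean rejection mask."""
    p = np.asarray(p_values, dtype=float)
    M = p.size
    order = np.argsort(p)                      # ranks of p_(1) <= ... <= p_(M)
    below = p[order] <= delta * np.arange(1, M + 1) / M
    reject = np.zeros(M, dtype=bool)
    if below.any():
        t = int(np.nonzero(below)[0].max())    # largest rank (0-based) under threshold
        reject[order[: t + 1]] = True          # reject every hypothesis up to rank t
    return reject
```

Note that every hypothesis ranked at or below t is rejected, even if some intermediate ordered p-value exceeds its own per-rank threshold.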
To apply this procedure in our setting, we need to find a tractable expression for the p-value at every location. This can be accomplished when A satisfies the distance-preservation condition stated below. LetV=\mathcal{D}\cup \left\{{\mathit{f}}_{i}^{\ast}:i\in \{1,\dots ,M\}\right\} be the set of all signals in the dictionary and the ones whose projections are measured. Note that |V|≤M + m. For a given ε∈(0,1), a projection operator\mathit{A}\in {\mathbb{R}}^{K\times N}, K≤N, is distance-preserving on V if the following holds for all u,v∈V:
The existence of such projection operators is guaranteed by the celebrated Johnson–Lindenstrauss (JL) lemma[49], which says that there exist random constructions of A for which (18) holds with probability at least 1−2|V|^{2}e^{−Kc(ε)} providedK=\mathcal{O}\left(\text{log}\phantom{\rule{0.3em}{0ex}}\left|V\right|\right)\le N, where c(ε)=ε^{2}/16−ε^{3}/48[51, 52]. Examples of such constructions are: (a) Gaussian matrices whose entries are drawn from\mathcal{N}(0,1/K), (b) Bernoulli matrices whose entries are\pm 1/\sqrt{K} with probability 1/2, (c) random matrices whose entries are\pm \sqrt{3/K} with probability 1/6 and zero with probability 2/3[51, 52], and (d) matrices that satisfy the restricted isometry property (RIP) where the signs of the entries in each column are randomized[53].
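The distance-preservation condition (18) is easy to check empirically for the Gaussian construction. In the sketch below, the dimensions, the size of the finite set V, and the value of ε are illustrative choices (not the paper's), and (18) is used in its squared-distance form.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, eps = 500, 250, 0.5

# A small finite set V of unit-norm signals whose pairwise distances should
# be preserved (an illustrative stand-in for the dictionary plus targets).
V = [v / np.linalg.norm(v) for v in rng.normal(size=(6, N))]

# Gaussian construction: entries drawn i.i.d. from N(0, 1/K), so that
# E||A(u - v)||^2 = ||u - v||^2 for every pair u, v.
A = rng.normal(0.0, 1.0 / np.sqrt(K), size=(K, N))

distances_preserved = all(
    (1 - eps) * np.linalg.norm(u - v) ** 2
    <= np.linalg.norm(A @ (u - v)) ** 2
    <= (1 + eps) * np.linalg.norm(u - v) ** 2
    for i, u in enumerate(V)
    for v in V[i + 1:]
)
```

With K this large relative to ε, the check succeeds with overwhelming probability, consistent with the 1−2|V|^{2}e^{−Kc(ε)} guarantee.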
We now state our main theorem, which gives a tight upper bound on the p-value at every location when {α_{i}} are unknown and are estimated from the observations. Let\left\{{\hat{\alpha}}_{i}\right\} be the estimates of {α_{i}} that satisfy
for i=1,…,M where ζ∈[0,1] is a measure of the accuracy of the estimation procedure.
Theorem 4
If the ith hypothesis test is defined according to (15a) and (15b), the projection matrix A satisfies (18) for a given ε∈(0,1), and the estimates\left\{{\hat{\alpha}}_{i}\right\} satisfy (19) for some ζ∈[0,1], then the bound
holds for all i=1,…,M, where\mathcal{F}\left(\cdot ;K,\nu \right) is the CDF of a noncentral χ^{2} random variable with K degrees of freedom and noncentrality parameter ν[54].
The proof of this theorem is given in Appendix 4. We find the p-value upper bounds at every location and use the BH procedure to perform anomaly detection. The performance of this procedure depends on the values of K, {α_{i}}, τ and ε. The parameter ε is a measure of the accuracy with which the projection matrix A preserves the distances between any two vectors in{\mathbb{R}}^{N}. A value of ε close to zero implies that the distances are preserved fairly accurately. When {α_{i}} are unknown and estimated from the observations, the performance depends on the accuracy of the estimation procedure, which is reflected in our bounds in (20) through ζ.
One can easily estimate {α_{i}} from {y_{i}} for some choices of A. For instance, if the entries of the projection matrix A are drawn from\mathcal{N}(0,1/K), the {α_{i}} can be estimated using a maximum likelihood estimator (MLE) by exploiting the statistics of the projection matrix and noise. Note that the jth element of the ith measured spectrum is{y}_{i,j}=\sum _{k=1}^{N}{\alpha}_{i}{f}_{i,k}^{\ast}{a}_{j,k}+{n}_{i,j}\sim \mathcal{N}\left(0,\sum _{k=1}^{N}\frac{{\alpha}_{i}^{2}}{K}{{f}_{i,k}^{\ast}}^{2}+1\right) for j∈{1,…,K}. Since{\parallel {\mathit{f}}_{i}^{\ast}\parallel}_{2}=1 according to our problem formulation,{y}_{i,j}\stackrel{\text{i.i.d.}}{\sim}\mathcal{N}\left(0,\frac{{\alpha}_{i}^{2}}{K}+1\right). The MLE of α_{i}, given by{\hat{\alpha}}_{i}={\text{arg}\,\text{max}}_{\alpha}\mathbb{P}\left({\mathit{y}}_{i}\mid \mathit{A},\alpha \right), then reduces to
In practice, we use{\hat{\alpha}}_{i}=\sqrt{{\left(\parallel {\mathit{y}}_{i}{\parallel}^{2}-K\right)}_{+}}, where (a)_{+}=a if a≥0 and 0 otherwise, so that the estimate remains real-valued when ∥y_{i}∥^{2}−K is negative. We can use concentration inequalities to show that with high probability,{\parallel {\mathit{y}}_{i}\parallel}_{2}^{2} is tightly concentrated around its mean\mathbb{E}\left[{\parallel {\mathit{y}}_{i}\parallel}_{2}^{2}\right]={\alpha}_{i}^{2}+K. Since{y}_{i,j}\stackrel{\text{i.i.d.}}{\sim}\mathcal{N}\left(0,\frac{{\alpha}_{i}^{2}}{K}+1\right), we have\frac{K}{{\alpha}_{i}^{2}+K}{\parallel {\mathit{y}}_{i}\parallel}_{2}^{2}\sim {\chi}_{K}^{2}. From ([55], Lemma 2.2) and ([56], Proposition 1 and Remark 1), for any t > 0
for some absolute constants C,c > 0. This result shows that with high probability,{\parallel {\mathit{y}}_{i}\parallel}_{2}^{2}-K is nonnegative.
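The clipped estimator is straightforward to simulate; the dimensions and the true α used below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 1000, 600
alpha_true = 30.0

f = rng.normal(size=N)
f /= np.linalg.norm(f)                    # ||f*||_2 = 1, as in the problem setup
A = rng.normal(0.0, 1.0 / np.sqrt(K), size=(K, N))
y = alpha_true * A @ f + rng.normal(size=K)

# ||y||^2 concentrates around E||y||^2 = alpha^2 + K, so the clipped
# estimator below is real-valued and close to alpha with high probability.
alpha_hat = np.sqrt(max(np.linalg.norm(y) ** 2 - K, 0.0))
```

Since the standard deviation of ∥y∥^{2} scales like\sqrt{K}, the relative accuracy of the estimate improves as K grows, which is the behavior captured by ζ in (19).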
The experimental results discussed in Section “Experimental results” demonstrate the performance of this detector as a function of K, {α_{i}} and τ when {α_{i}} are known and as a function of K, τ and ζ when {α_{i}} are estimated.
Experimental results
In the experiments that follow, the entries of A are drawn from\mathcal{N}(0,1/K).
Dictionary signal detection
To test the effectiveness of our approach, we formed a dictionary\mathcal{D} of nine spectra (corresponding to different kinds of trees, grass, water bodies, and roads) obtained from a labeled HyMap (Hyperspectral Mapper) remote sensing data set[57], and simulated a realistic dataset using the spectra from this dictionary. Each HyMap spectrum is of length N=106. We generated projection measurements of these data such that{\mathit{z}}_{i}={\alpha}_{i}\mathit{\Phi}({\mathit{f}}_{i}^{\ast}+{\mathit{b}}_{i})+{\mathit{w}}_{i} according to (1), where{\mathit{w}}_{i}\sim \mathcal{N}(0,{\sigma}^{2}\mathit{I}),{\mathit{f}}_{i}^{\ast}\in \mathcal{D} for i=1,…,8100,{\mathit{b}}_{i}\sim \mathcal{N}\left({\mathit{\mu}}_{\mathit{b}},{\mathit{\Sigma}}_{\mathit{b}}\right) such that Σ_{ b } satisfies the condition in (4), and{\alpha}_{i}={\alpha}_{i}^{\ast}\sqrt{K}, where{\alpha}_{i}^{\ast}\sim \mathcal{U}[21,25] and\mathcal{U} denotes the uniform distribution. We let σ^{2}=5 and model {α_{i}} as proportional to\sqrt{K} to account for the fact that the total observed signal energy increases as the number of detectors increases. We transform the z_{i} by a series of operations to arrive at a model of the form discussed in (2), namely{\mathit{y}}_{i}={\alpha}_{i}\mathit{A}{\mathit{f}}_{i}^{\ast}+{\mathit{n}}_{i}. For this dataset, p_{min}=0.04938, p_{max}=0.1481, and d_{min}=0.04341.
We evaluate the performance of our detector (7) on the transformed observations, relative to the number of measurements K, by comparing the detection results to the ground truth. Our MAP detector returns a label{L}_{i}^{\text{MAP}} for every observed spectrum which is determined according to
where m is the number of signals in\mathcal{D}, and p^{(ℓ)} is the a priori probability of target class ℓ. In our experiments, we evaluate the performance of our classifier when (a) {α_{i}} are known (AK) and (b) {α_{i}} are unknown (AU) and must be estimated from y. The empirical pFDR^{(j)} for each target spectrum j is calculated as follows:
where\left\{{L}_{i}^{\mathrm{\text{GT}}}\right\} denote the ground truth labels. The empirical pFDR^{(·)} is the ratio of the number of missed targets to the total number of signals declared to be non-targets. The plots in Figure 1a show the results obtained using our target detection approach under the AK case (shown by a dark gray dashed line) and the AU case (shown by a light gray dashed line), compared to the theoretical upper bound (shown by a solid line). These results are obtained by averaging the pFDR values over 1000 different noise, sensing matrix, and background realizations. Note that the theoretical results only apply to the AK case, since they were derived under the assumption that {α_{i}} are known. The experimental results are shown for both AK and AU cases to provide a comparison between the two scenarios. In both cases, the worst-case empirical pFDR curves decay as K increases. In the AK case, in particular, the worst-case empirical pFDR curve decays at the same rate as the upper bound. In this experiment, for fixed α_{min} and d_{min}, we chose K to satisfy (13c). The theory is somewhat conservative, and in practice the method works well even when the values of K are below the bound in (13c).
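The empirical pFDR^{(j)} computation just described amounts to the following; the integer label encoding is an assumption for illustration.

```python
import numpy as np

def empirical_pfdr(labels_map, labels_gt, j):
    """Empirical pFDR for target class j: the ratio of missed targets
    (ground truth j, declared something else) to the total number of
    spectra declared to be non-targets, with the ratio taken as 0
    when no spectrum is declared a non-target (V/R = 0 convention)."""
    declared_non_target = labels_map != j
    if not declared_non_target.any():
        return 0.0
    missed = declared_non_target & (labels_gt == j)
    return missed.sum() / declared_non_target.sum()
```

Averaging this quantity over independent noise, sensing-matrix, and background realizations produces the empirical curves in Figure 1a.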
In the experiment that follows, we let{\alpha}_{i}^{\ast}\sim \mathcal{U}[10,20], where\mathcal{U} denotes a uniform random variable, set{\alpha}_{i}=\sqrt{K}{\alpha}_{i}^{\ast}, and evaluate the performance of our detector for different values of K that are not necessarily chosen to satisfy (13c). In addition, we compare the performance of our detection method to that of a MAP-based target detector operating on downsampled versions of our simulated spectral input image. The purpose of this comparison is to show which kinds of measurements yield better results given a fixed number of detectors.
For an input spectrum\mathit{g}\in {\mathbb{R}}^{N}, we let\stackrel{~}{\mathit{g}}\in {\mathbb{R}}^{K} denote its downsampled approximation. Specifically, the jth element of\stackrel{~}{\mathit{g}} is\sum _{\ell =1}^{r}{g}_{(j-1)r+\ell}, where r=⌈N/K⌉. Let us consider making observations of the form
where{\stackrel{~}{\mathit{g}}}_{i}={\alpha}_{i}{\stackrel{~}{\mathit{f}}}_{i}^{\ast}+{\stackrel{~}{\mathit{b}}}_{i} is the K-dimensional downsampled version of{\mathit{f}}_{i}^{\ast}+{\mathit{b}}_{i} for K≤N,{\mathit{n}}_{i}\sim \mathcal{N}(0,{\sigma}^{2}\mathit{I}) for σ^{2}=5, and c is a constant chosen to preserve the mean signal-to-noise ratio of the downsampled and projection measurements. The MAP-based detector operating on the downsampled data returns a label{D}_{i}^{\text{MAP}} for every observed spectrum, determined according to
whereG={\stackrel{~}{\mathit{\Sigma}}}_{b}+{\sigma}^{2}\mathit{I},{\stackrel{~}{\mathit{\Sigma}}}_{b} is the covariance matrix obtained from the downsampled versions of the background training data, and{\stackrel{~}{\mathit{f}}}^{\left(\ell \right)} is the downsampled version of{\mathit{f}}^{\left(\ell \right)}\in \mathcal{D}. The algorithm declares that target spectrum{\mathit{f}}^{\left(j\right)}\in \mathcal{D} is present in the ith location if{D}_{i}^{\text{MAP}}=j. In order to illustrate the advantages of using a Φ designed according to (24), we compare the performance of the proposed detector when Φ is chosen to be a random Gaussian matrix whose entries are drawn from\mathcal{N}\left(0,1/K\right) and when Φ is chosen according to (24). Figure 1b shows a comparison of the results obtained using projection measurements with Φ designed according to (24), with Φ chosen at random, and using the downsampled measurements, all under the AK case. These results show that the detection algorithm operating on projection measurements with Φ designed using the background and sensor noise statistics yields significantly better results than the one operating on the downsampled data, and that the empirical pFDR values of our method decay with K. The improvement in performance using projection measurements comes from the distance-preservation property of the projection operator A. While a Gaussian sensing matrix A preserves distances between any pair of vectors from a finite collection with high probability[51, 52], downsampling loses some of the fine differences between similar-looking spectra in the dictionary. Furthermore, when Φ is chosen at random, the resulting whitened transformation matrix is not necessarily distance-preserving. This may worsen the performance, as illustrated in Figure 1b.
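A minimal sketch of the downsampling map used for this baseline, assuming the tail is zero-padded when r·K exceeds N (the text does not specify how the tail is handled):

```python
import numpy as np

def downsample(g, K):
    """K-dimensional approximation of g: the j-th entry sums
    r = ceil(N / K) consecutive samples g_{(j-1)r+1}, ..., g_{jr}.
    The tail is zero-padded when r * K > N (an assumption here)."""
    N = g.size
    r = int(np.ceil(N / K))
    padded = np.zeros(r * K)
    padded[:N] = g
    return padded.reshape(K, r).sum(axis=1)
```

Unlike a random projection, this map collapses the fine spectral detail within each bin, which is why similar-looking dictionary spectra become harder to distinguish after downsampling.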
Anomaly detection
In this section, we evaluate the performance of our anomaly detection method on (a) a simulated dataset, providing a comparison of the results obtained using the proposed projection measurements and those obtained using downsampled measurements, and (b) a real AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) dataset.
Experiments on simulated data
We simulate a spectral image f^{∗} composed of 8100 spectra, each of which is either drawn from a dictionary\mathcal{D}=\{{\mathit{f}}^{\left(1\right)},\cdots \phantom{\rule{0.3em}{0ex}},{\mathit{f}}^{\left(5\right)}\} consisting of five labeled spectra from the HyMap data that correspond to a natural landscape (trees, grass, and lakes), or is anomalous. The anomalous spectrum is extracted from unlabeled AVIRIS data, and the minimum distance between the anomalous spectrum f^{(a)} and any of the spectra in\mathcal{D} is{d}_{\mathrm{\text{min}}}={\mathrm{\text{min}}}_{\mathit{f}\in \mathcal{D}}\parallel \mathit{f}-{\mathit{f}}^{\left(\mathrm{a}\right)}\parallel =0.5308. The simulated data has 625 locations that contain the anomalous spectrum. Our goal is to find the spatial locations that contain the anomalous AVIRIS spectrum given noisy measurements of the form{\mathit{z}}_{i}=\mathit{\Phi}\left({\alpha}_{i}{\mathit{f}}_{i}^{\ast}+{\mathit{b}}_{i}\right)+{\mathit{w}}_{i}, where{\mathit{b}}_{i}\sim \mathcal{N}\left({\mathit{\mu}}_{\mathit{b}},{\mathit{\Sigma}}_{\mathit{b}}\right), Φ is designed according to (24),{\mathit{w}}_{i}\sim \mathcal{N}(0,{\sigma}^{2}\mathit{I}), and{\mathit{f}}_{i}^{\ast}\in \mathcal{D} under{\mathcal{\mathscr{H}}}_{0i}. As discussed in Section “Anomalous signal detection”,{\mathit{f}}_{i}^{\ast} is anomalous under{\mathcal{\mathscr{H}}}_{1i}, and our goal is to control the FDR below a user-specified false discovery level δ. We simulate{\alpha}_{i}=\sqrt{K}{\alpha}_{i}^{\ast}, where{\alpha}_{i}^{\ast}\sim \mathcal{U}[2,3]. In this experiment, we assume the availability of background training data to estimate the background statistics and the sensor noise variance σ^{2}. Given the knowledge of the background statistics, we perform the whitening transformation discussed in Section “Whitening compressive observations” and evaluate the detection performance on the preprocessed observations given by (2).
For fixed τ=0.1 and ε=0.1, we evaluate the performance of the detector as the number of measurements K increases, under both the AK and AU cases, by comparing the pseudo-ROC (receiver operating characteristic) curves obtained by plotting the empirical FDR against 1−FNR, where FNR is the false non-discovery rate. Note that 1−FNR is the expected ratio of the number of null hypotheses that are correctly accepted to the total number of hypotheses declared null. The empirical FDR and FNR are computed according to
where p_{ t } is the p-value threshold such that the BH procedure rejects all null hypotheses for which p_{ i }≤p_{ t }, and the ground truth label{L}_{i}^{\mathrm{\text{GT}}}=0 if the ith spectrum is not anomalous, and 1 otherwise. In this experiment, we consider three values of K approximately given by K∈{N/6,N/3,N/2}, where N=106, and evaluate the performance of our detector for each K. Furthermore, in our experiments with simulated data, we declare a spectrum to be anomalous if d_{ i }≥η, where η is a user-specified threshold and d_{ i } is defined in (16). We use the p-value upper bound in (20) in our experiments with real data, where the ground truth is unknown.
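The empirical FDR and FNR used for the pseudo-ROC curves can be computed as follows; the zero-denominator conventions are the ones stated earlier.

```python
import numpy as np

def empirical_fdr_fnr(p_values, gt_anomalous, p_t):
    """FDR: falsely declared anomalies over all declared anomalies.
    FNR: missed anomalies over all spectra declared normal.
    Each ratio is taken as 0 when its denominator is 0."""
    p_values = np.asarray(p_values)
    gt_anomalous = np.asarray(gt_anomalous, dtype=bool)
    declared = p_values <= p_t               # rejections at threshold p_t
    R = declared.sum()
    fdr = (declared & ~gt_anomalous).sum() / R if R else 0.0
    W = (~declared).sum()
    fnr = (~declared & gt_anomalous).sum() / W if W else 0.0
    return fdr, fnr
```

Sweeping p_{ t } over [0,1] and plotting FDR against 1−FNR traces out one pseudo-ROC curve.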
We compare the performance of our method to a generalized likelihood ratio test (GLRT)-based procedure operating on downsampled data, where we collect measurements of the form in (23) with{\mathit{f}}_{i}^{\ast}\in \mathcal{D} under{\mathcal{\mathscr{H}}}_{0i}. Observe that{\mathit{y}}_{i}\mid {\mathcal{\mathscr{H}}}_{0i}\sim \sum _{\mathit{f}\in \mathcal{D}}\mathbb{P}\left({\mathit{f}}_{i}^{\ast}=\mathit{f}\right)\mathcal{N}({\alpha}_{i}\stackrel{~}{\mathit{f}},{\stackrel{~}{\mathit{\Sigma}}}_{b}+\mathit{I}), where\stackrel{~}{\mathit{f}} refers to the downsampled version of\mathit{f}\in \mathcal{D}. In this experiment, we assume that each spectrum in\mathcal{D} is equally likely under{\mathcal{\mathscr{H}}}_{0i} for i=1,…,M. The GLRT-based approach declares the ith spectrum to be anomalous if
for i=1,…,M, where η is a user-specified threshold[26]. While our anomaly detection method is designed to control the FDR below a user-specified threshold, the GLRT-based method is designed to increase the probability of detection while keeping the probability of false alarm as low as possible. To facilitate a fair evaluation of these methods, we compare the pseudo-ROC curves (FDR versus 1−FNR) and the conventional ROC curves (probability of false alarm p_{ f } versus probability of detection p_{ d }) corresponding to these methods, obtained by averaging the empirical FDR, FNR, p_{ d } and p_{ f } over 1,000 different noise and sensing matrix realizations for different values of K. We also compare the performance of the proposed method when Φ is chosen according to (24) and when it is chosen at random, as discussed in the previous section. Figure 2a,e shows the pseudo-ROC plots and the conventional ROC plots obtained using the GLRT-based method operating on downsampled data when {α_{ i }} are known. Figure 2b,f shows the results obtained by using a random Gaussian Φ instead of the Φ in (24). Figure 2c,g shows the pseudo-ROC plots and the conventional ROC plots obtained using our method when {α_{ i }} are known. These plots show that performing anomaly detection on our designed projection measurements yields better results than performing anomaly detection on downsampled measurements or on measurements obtained using a random Gaussian Φ. This is largely because carefully chosen projection measurements preserve distances (up to a constant factor) among pairs of vectors in a finite collection, whereas the downsampled measurements fail to preserve distances among vectors that are very similar to each other. Similarly, a random projection matrix Φ is not necessarily distance-preserving after the whitening transformation, which leads to poor performance, as illustrated in Figure 2b,f.
Figure 2d,h shows the pseudo-ROC plots and the conventional ROC plots obtained using our method when {α_{ i }} are unknown and are estimated from the measurements. Note that the value of ζ decreases as K increases, since the estimation accuracy of {α_{ i }} improves with K. These plots show that the performance improves as we collect more observations and that, as expected, the performance under the AK case is better than that under the AU case.
Experiments on real AVIRIS data
To test the performance of our anomaly detector on a real dataset, we consider the unlabeled AVIRIS Jasper Ridge dataset\mathit{g}\in {\mathbb{R}}^{614\times 512\times 197}, which is publicly available from the NASA AVIRIS website,http://aviris.jpl.nasa.gov/html/aviris.freedata.html. We split this data spatially to form equisized training and validation datasets, g^{t} and g^{v} respectively, each of size 128×128×197. Figure 3a,b shows images of the AVIRIS training and validation data summed through the spectral coordinates. The training data are comprised of a rocky terrain with a small patch of trees. The validation data seem to be made of a similar rocky terrain, but also contain an anomalous lake-like structure. The goal is to evaluate the performance of the detector in detecting the anomalous region in the validation data for different values of K. We cluster the spectral targets in the normalized training data into eight clusters using the K-means clustering algorithm and form a dictionary\mathcal{D} comprising the cluster centroids. Given the dictionary and the validation data, we find the ground truth by labeling the ith validation spectrum as anomalous if{\mathrm{\text{min}}}_{\mathit{f}\in \mathcal{D}}\parallel \mathit{f}-\frac{{\mathit{g}}_{i}^{v}}{\parallel {\mathit{g}}_{i}^{v}\parallel}\parallel >\tau. Since the statistics of the possible background contamination in the data could not be learned in this experiment because of the lack of labeled training data, the dictionary might be background contaminated as well. The parameter τ encapsulates this uncertainty in our knowledge of the dictionary. In this experiment, we set τ=0.2.
We generate measurements of the form{\mathit{y}}_{i}=\sqrt{K}\mathit{A}{\mathit{g}}_{i}^{v}+{\mathit{n}}_{i} for i=1,…,128×128, where{\mathit{n}}_{i}\sim \mathcal{N}(0,\mathit{I}). The\sqrt{K} factor indicates that the observed signal strength increases with K. For a fixed FDR control level of 0.01, Figure 3c,d shows the results obtained for K≈N/5 and K≈N/2, respectively. Figure 3e shows how the probability of error decays as a function of the number of measurements K. The results presented here are obtained by averaging over 1,000 different noise and sensing matrix realizations. From these results, we can see that the number of detected anomalies increases with K and the number of misclassifications decreases with K.
Conclusion
This work presents computationally efficient approaches for detecting known targets and anomalies of different strengths from projection measurements without performing a complete reconstruction of the underlying signals, and offers theoretical bounds on the worstcase target detector performance. This article treats each signal as independent of its spatial or temporal neighbors. This assumption is reasonable in many contexts, especially when the spatial or temporal resolution is low relative to the spatial homogeneity of the environment or the pace with which a scene changes. However, emerging technologies in computational optical systems continue to improve the resolution of spectral imagers. In our future work we will build upon the methods that we have discussed here to exploit the spatial or temporal correlations in the data.
Appendix 1: Proof of Theorem 1
Using linear algebra and matrix theory, it is possible to show that if B=I−A Σ_{ b }A^{T} is positive definite, then
satisfies (3).^{c} In particular, we can substitute (24) in (3) to verify that the proposed construction of Φ satisfies (3). Observe that C_{ Φ }=(Φ Σ_{ b }Φ^{T} + σ^{2}I)^{−1/2} can be written in terms of (24) as follows:
where the third-to-last equation follows from the definition of B, and (25) follows from the fact that B is symmetric and positive definite. If B is positive definite, then B^{−1} is positive definite as well and can be decomposed as B^{−1}=(B^{−1/2})^{T}B^{−1/2}, where the matrix square root B^{−1/2} is symmetric and positive definite. By substituting (25) and (24) in (3), we have C_{ Φ }Φ=σ^{−1}B^{1/2}σ B^{−1/2}A=A. A sufficient condition for B to be positive definite can be derived as follows.
To ensure positive definiteness of B, we must have
for any nonzero\mathit{x}\in {\mathbb{R}}^{K}. Note that since Σ_{ b } is positive semidefinite, x^{T}(A Σ_{ b }A^{T})x≥0. However, the right-hand side of (26) is > 0 only if the spectral norm of A Σ_{ b }A^{T} is < 1, since{\mathit{x}}^{T}\left(\mathit{A}{\mathit{\Sigma}}_{\mathit{b}}{\mathit{A}}^{T}\right)\mathit{x}\le \parallel \mathit{x}{\parallel}^{2}\cdot \parallel \mathit{A}{\mathit{\Sigma}}_{\mathit{b}}{\mathit{A}}^{T}\parallel. The norm of A Σ_{ b }A^{T} is in turn bounded above by
since ∥A∥=∥A^{T}∥ and ∥Σ_{ b }∥=λ_{max}, where λ_{max} is the largest eigenvalue of Σ_{ b }. To ensure ∥A Σ_{ b }A^{T}∥ < 1, ∥A∥^{2}λ_{max} has to be < 1, which leads to the result of Theorem 1.
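The construction of Theorem 1 can be verified numerically: with Φ=σ B^{−1/2}A and B=I−A Σ_{ b }A^{T} positive definite, the whitening matrix C_{ Φ }=(Φ Σ_{ b }Φ^{T} + σ^{2}I)^{−1/2} recovers A exactly. The dimensions and the random Σ_{ b } below are illustrative; Σ_{ b } is rescaled so the sufficient condition ∥A∥^{2}λ_{max} < 1 holds.

```python
import numpy as np

def sym_inv_sqrt(M):
    # Inverse square root of a symmetric positive definite matrix,
    # computed via its eigendecomposition.
    w, U = np.linalg.eigh(M)
    return U @ np.diag(w ** -0.5) @ U.T

rng = np.random.default_rng(3)
K, N, sigma = 6, 12, 0.7

A = rng.normal(0.0, 1.0 / np.sqrt(K), size=(K, N))

# Random PSD background covariance, rescaled so ||A||^2 * lambda_max = 0.9 < 1,
# the sufficient condition derived above for B to be positive definite.
S = rng.normal(size=(N, N))
Sigma_b = S @ S.T
Sigma_b *= 0.9 / (np.linalg.norm(A, 2) ** 2 * np.linalg.eigvalsh(Sigma_b)[-1])

B = np.eye(K) - A @ Sigma_b @ A.T            # positive definite by construction
Phi = sigma * sym_inv_sqrt(B) @ A            # the design in (24)
C_Phi = sym_inv_sqrt(Phi @ Sigma_b @ Phi.T + sigma ** 2 * np.eye(K))

whitening_error = np.max(np.abs(C_Phi @ Phi - A))   # should be ~ 0
```

Algebraically, Φ Σ_{ b }Φ^{T} + σ^{2}I=σ^{2}B^{−1/2}(A Σ_{ b }A^{T} + B)B^{−1/2}=σ^{2}B^{−1}, so C_{ Φ }=σ^{−1}B^{1/2} and C_{ Φ }Φ=A, matching the identity in the proof.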
Appendix 2: Proof of Theorem 2
The proof of Theorem 2 adapts the proof techniques from[48] to nonidentical independent hypothesis tests. We begin by expanding the pFDR definition in (8) as follows:
Observe that R(Γ)=k implies that there exists some subset S_{ k }={u_{1},…,u_{ k }}⊆{1,…,M} of size k such that{\mathit{y}}_{{u}_{\ell}}\in {\Gamma}_{{u}_{\ell}}^{\left(j\right)} for ℓ=1,…,k and{\mathit{y}}_{i}\notin {\Gamma}_{i}^{\left(j\right)} for all i∉S_{ k }. To simplify the notation, let{\Lambda}_{{S}_{k}}=\prod _{u\in {S}_{k}}{\Gamma}_{u}^{\left(j\right)}\times \prod _{\ell \notin {S}_{k}}{\stackrel{~}{\Gamma}}_{\ell}^{\left(j\right)}, where{\stackrel{~}{\Gamma}}_{\ell}^{\left(j\right)} is the complement of{\Gamma}_{\ell}^{\left(j\right)}, denote the significance region corresponding to the set S_{ k }, and let T=(y_{1},…,y_{ M }) be the set of test statistics corresponding to the hypothesis tests. Considering all such subsets, we have
By plugging in the definition of V({Γ_{ i }}) from (9), we have
for all u_{ ℓ }∈S_{ k }, since the tests are independent of each other given A. The posterior probability\mathbb{P}\left({\mathcal{\mathscr{H}}}_{i}^{\left(j\right)}=0\mid {\mathit{y}}_{i}\in {\Gamma}_{i}^{\left(j\right)}\right) for the ith hypothesis test can be expanded using Bayes’ rule as
where{\hat{\mathit{f}}}_{i}={\text{arg}\,\text{max}}_{{\mathit{f}}^{\left(\ell \right)}\in \mathcal{D}}\mathbb{P}\left({\mathit{f}}_{i}^{\ast}={\mathit{f}}^{\left(\ell \right)}\mid {\mathit{y}}_{i},{\alpha}_{i},\mathit{A}\right). To upper bound the numerator of (29), consider the probability of misclassification{\left({\mathrm{P}}_{\mathrm{e}}\right)}_{i}=\mathbb{P}\left({\hat{\mathit{f}}}_{i}\ne {\mathit{f}}_{i}^{\ast}\right), where{\mathit{f}}_{i}^{\ast}={\mathit{f}}^{\left(j\right)}\in \mathcal{D}, which can be expanded as follows:
The denominator term in (29) can be expanded as follows:
Observe that\mathbb{P}\left({\hat{\mathit{f}}}_{i}\ne {\mathit{f}}^{\left(j\right)}\mid {\mathit{f}}_{i}^{\ast}={\mathit{f}}^{\left(j\right)}\right) is nonnegative, and
Thus
Substituting (30) and (31) in (29),
By substituting (32) in (27) and (28) we have:
since\sum _{k=1}^{M}\sum _{{S}_{k}}\mathbb{P}\left(\mathit{T}\in {\Lambda}_{{S}_{k}}\mid R\left(\mathit{\Gamma}\right)>0\right)=1. The result of Theorem 2 is obtained by finding an upper bound on the worst-case pFDR given by
where p_{max}=max_{ℓ∈{1,…,m}}p^{(ℓ)}.
Appendix 3: Proof of Theorem 3
The proof is via a random selection technique, similar to random coding arguments common in information theory. Specifically, we will draw a K×N sensing matrix A at random from a particular distribution and then show that, for ε, N, and K satisfying the conditions of the theorem, the probability that the conclusions of the theorem will fail to hold for this randomly chosen A is strictly smaller than unity. This will imply that the conclusions of the theorem must be true for at least one (deterministic) realization of A.
We begin by specifying all the relevant random variables:

{\mathit{f}}_{1}^{\ast},\dots ,{\mathit{f}}_{M}^{\ast} are i.i.d. random variables taking values in the dictionary\mathcal{D}=\{{\mathit{f}}^{\left(1\right)},\dots ,{\mathit{f}}^{\left(m\right)}\} with probabilities{p}^{\left(j\right)}=\text{Pr}\{{\mathit{f}}_{i}^{\ast}={\mathit{f}}^{\left(j\right)}\},j\in \{1,\dots ,m\};

{\mathit{n}}_{1},\dots ,{\mathit{n}}_{M}\stackrel{\text{i.i.d.}}{\sim}\mathcal{N}(0,\mathit{I}) ;

G is a random K×N matrix with i.i.d.\mathcal{N}(0,1) entries.
We assume that{\left\{{\mathit{f}}_{i}^{\ast}\right\}}_{i=1}^{M},{\left\{{\mathit{n}}_{i}\right\}}_{i=1}^{M}, and G are mutually independent, and we will denote by P their joint probability distribution. Finally, we let\mathit{A}=\frac{1}{\sqrt{K}}\mathit{G} and consider the observation model
where α_{1},…,α_{ M } > 0 are the given signal strengths.
We first consider the case when α_{1}=⋯=α_{ M }=α. Given ε, N, and K, we define the following two error events:
where, for each i∈{1,…,M},{\hat{\mathit{f}}}_{i} is defined according to (12). Note that, since we have assumed that the α_{ i }’s are equal and all the pairs({\mathit{f}}_{i}^{\ast},{\mathit{n}}_{i}),i\in \{1,\dots ,M\}, are i.i.d.,
We will now prove that
The union bound gives\mathsf{P}({\mathcal{E}}_{1}\cup {\mathcal{E}}_{2})\le \mathsf{P}\left({\mathcal{E}}_{1}\right)+\mathsf{P}\left({\mathcal{E}}_{2}\right). First, we bound\mathsf{P}\left({\mathcal{E}}_{1}\right). To do that, we use the following concentration result for Gaussian random matrices[58]: for any t≥0,
Letting t=\epsilon (\sqrt{K}+\sqrt{N}) and using the fact that t^{2}≥(K + N)ε^{2}, we get
Next, we bound\mathsf{P}\left({\mathcal{E}}_{2}\right). To that end, we use the following result, which is a straightforward extension of ([5], Theorem 1) to nonequiprobable dictionary elements:
Lemma 1 (Compressive classification error)
Consider the problem of classifying a signal of interest{\mathit{f}}^{\ast}\in \mathcal{D}=\{{\mathit{f}}^{\left(1\right)},\dots ,{\mathit{f}}^{\left(m\right)}\} to one of m known target classes by making observations of the form y=α A f^{∗} + n where\mathit{n}\sim \mathcal{N}\left(0,{\sigma}^{2}\mathit{I}\right), given the knowledge of the dictionary\mathcal{D}, prior probabilities p^{(j)}for j∈{1,⋯,m}, sensing matrix A, and the noise variance σ^{2}. If the entries of A are drawn i.i.d. from\mathcal{N}\left(0,1/K\right) independently of f^{∗} and n, and the estimate\hat{\mathit{f}} is obtained according to (12), then
where the probability is taken with respect to the distributions underlying f^{∗}, A, and n.
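A minimal simulation may help make the classification setting of Lemma 1 concrete. The MAP rule behind (12) picks the dictionary element minimizing ‖y − αAf^{(j)}‖²/(2σ²) − log p^{(j)}. The dictionary, prior, signal strength α, noise level σ, and dimensions below are illustrative choices of ours, not values from the paper:

```python
import numpy as np

# Sketch of compressive MAP classification: y = alpha * A f* + n,
# A has i.i.d. N(0, 1/K) entries, f* drawn from a known dictionary with a prior.
rng = np.random.default_rng(1)
N, K, m = 100, 40, 5
alpha, sigma = 2.0, 0.5

# Random unit-norm dictionary and a non-uniform prior (illustrative values)
D = rng.standard_normal((m, N))
D /= np.linalg.norm(D, axis=1, keepdims=True)
p = np.array([0.4, 0.3, 0.15, 0.1, 0.05])

A = rng.standard_normal((K, N)) / np.sqrt(K)  # entries i.i.d. N(0, 1/K)

def map_classify(y):
    # MAP rule: minimize ||y - alpha*A*f_j||^2 / (2 sigma^2) - log p_j
    scores = [np.sum((y - alpha * A @ D[j]) ** 2) / (2 * sigma**2) - np.log(p[j])
              for j in range(m)]
    return int(np.argmin(scores))

trials, errors = 1000, 0
for _ in range(trials):
    j = rng.choice(m, p=p)
    y = alpha * A @ D[j] + sigma * rng.standard_normal(K)
    errors += (map_classify(y) != j)

print(errors / trials)  # empirical probability of misclassification
```

At this signal strength the empirical error rate is small, consistent with the lemma's exponential bound on the misclassification probability.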
Using the above lemma, we have
Combining (36) and (37), we get (35).
Because of (13a), the right-hand side of (35) is less than 1−ε−p_{max}, which is strictly positive by hypothesis. Thus, from the fact that
and from (34), it follows that there exists at least one deterministic choice of the K×N sensing matrix A^{∗}, such that:
where, for a given choice of A, (P_{e})_{max}(A) denotes the maximum probability of error defined in Theorem 2.
Next, from (38a) and (13b) it follows that A^{∗} satisfies the conditions of Theorem 1. Finally, we use (11) to bound the worst-case pFDR achievable with A^{∗}. First of all, we note that the function U\left(x\right)=\frac{x}{1-{p}_{\mathrm{max}}-x} is twice differentiable and convex on the interval [0,1−p_{max}). Therefore, for any x∈[0,1−p_{max}) and any h > 0 small enough so that x + h∈[0,1−p_{max}), we have
Let us choose
Then from (13a) we have x + h≤1−ε−p_{max} < 1−p_{max}, and from (13c) we have x + h≥0. Hence, using (39) and simplifying, we obtain the bound
This proves the theorem for the case α_{1}=⋯=α_{ M }=α.
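The convexity step used above, the first-order bound U(x + h) ≤ U(x) + h U′(x + h) for the convex function U, is easy to verify numerically; the value of p_max below is an arbitrary illustration:

```python
import numpy as np

# Numerical sanity check of the first-order convexity bound
#   U(x + h) <= U(x) + h * U'(x + h)
# for U(x) = x / (1 - p_max - x), convex on [0, 1 - p_max).
p_max = 0.3

def U(x):
    return x / (1.0 - p_max - x)

def U_prime(x):
    # d/dx [x / (1 - p_max - x)] = (1 - p_max) / (1 - p_max - x)**2
    return (1.0 - p_max) / (1.0 - p_max - x) ** 2

for x in np.linspace(0.0, 0.5, 11):
    for h in (0.01, 0.05, 0.1):
        if x + h < 1.0 - p_max:
            assert U(x + h) <= U(x) + h * U_prime(x + h) + 1e-12
```

The bound follows from the gradient inequality for convex functions evaluated at x + h, which is exactly the step applied to reach (39).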
To handle the case when the α_{ i }’s are distinct, we simply let
and replace the definition of the error event{\mathcal{E}}_{2} with{\mathcal{E}}_{2}^{\prime}=\{{\hat{\mathit{f}}}_{{i}^{\ast}}\ne {\mathit{f}}_{{i}^{\ast}}^{\ast}\}. Then the same argument goes through, except that instead of (34) we use the bound
which follows from the following argument. First of all, we can replace the observation model with the equivalent model
where{\stackrel{~}{\mathit{n}}}_{i}=\frac{1}{{\alpha}_{i}}{\mathit{n}}_{i}\sim \mathcal{N}(0,\frac{1}{{\alpha}_{i}^{2}}\mathit{I}). Secondly, from the fact that α_{ i }≥α_{i∗}≡α_{min} for any i≠i^{∗}, it follows that{\stackrel{~}{\mathit{n}}}_{{i}^{\ast}} is equal in distribution to{\stackrel{~}{\mathit{n}}}_{i}+{\stackrel{~}{\mathit{n}}}_{i}^{\prime}, where{\stackrel{~}{\mathit{n}}}_{i}^{\prime}\sim \mathcal{N}\left(0,\left(\frac{1}{{\alpha}_{\mathrm{min}}^{2}}-\frac{1}{{\alpha}_{i}^{2}}\right)\mathit{I}\right) is independent of{\stackrel{~}{\mathit{n}}}_{i}. This implies that the i^{∗}th observation is the noisiest, and the corresponding MAP estimate{\hat{\mathit{f}}}_{{i}^{\ast}} has the largest probability of error.
Appendix 4: Proof of Theorem 4
We first prove this theorem assuming that {α_{ i }} are known, and later extend the proof to the case where\left\{{\hat{\alpha}}_{i}\right\} are estimated from the observations. Let{\stackrel{~}{\mathit{f}}}_{i}={\mathrm{argmin}}_{\mathit{f}\in \mathcal{D}}\parallel {\mathit{f}}_{i}^{\ast}-\mathit{f}\parallel. The p-value expression in (17) can be expanded as follows:
Note that\parallel {\alpha}_{i}\mathit{A}({\mathit{f}}_{i}^{\ast}-{\stackrel{~}{\mathit{f}}}_{i})+\mathit{n}{\parallel}^{2} is a noncentral χ^{2} random variable with K degrees of freedom and a noncentrality parameter{\nu}_{i}=\parallel {\alpha}_{i}\mathit{A}\left({\mathit{f}}_{i}^{\ast}-{\stackrel{~}{\mathit{f}}}_{i}\right){\parallel}^{2}. Thus (40) can be written in terms of a noncentral χ^{2} CDF\mathcal{F}\left({d}_{i}^{2};K,{\nu}_{i}\right) with parameter{d}_{i}^{2}. The upper and lower bounds on ν_{ i } can be obtained using the properties of the projection matrix A. Applying (18), we see that
with high probability. Thus,
since\parallel {\mathit{f}}_{i}^{\ast}-\mathit{f}\parallel \le \tau for all\mathit{f}\in \mathcal{D} under{\mathcal{H}}_{0i}.
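The noncentral χ² characterization above can be checked against a standard statistics library; the vector standing in below for α_{i}A(f_{i}^{∗} − f̃_{i}) is synthetic, chosen only to fix a noncentrality parameter:

```python
import numpy as np
from scipy import stats

# Empirical check that ||mu + n||^2 with n ~ N(0, I_K) follows a noncentral
# chi-square law with K degrees of freedom and noncentrality nu = ||mu||^2,
# where mu stands in for alpha_i * A (f_i* - f~_i).
rng = np.random.default_rng(2)
K = 30
mu = rng.standard_normal(K)
nu = float(mu @ mu)  # noncentrality parameter

samples = np.array([np.sum((mu + rng.standard_normal(K)) ** 2)
                    for _ in range(20000)])

# Compare the empirical CDF to scipy's noncentral chi-square CDF
for q in (0.25, 0.5, 0.75):
    x = np.quantile(samples, q)
    assert abs(stats.ncx2.cdf(x, df=K, nc=nu) - q) < 0.02
```

This is the distribution whose CDF \mathcal{F}(d_{i}^{2}; K, ν_{i}) appears in the p-value expression; bounding ν_{i} via (18) then bounds the p-value itself.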
When {α_{ i }} are estimated from the observations such that\left\{{\hat{\alpha}}_{i}\right\} satisfy (19), we can write the p-value expression in (41) as follows:
where (42) is due to the distance preservation property of A given in (18). Observe that{\parallel \frac{{\alpha}_{i}}{{\hat{\alpha}}_{i}}{\mathit{f}}_{i}^{\ast}-{\stackrel{~}{\mathit{f}}}_{i}\parallel}^{2} can be upper bounded as shown below:
where the third-to-last line is due to the triangle inequality, the second-to-last line comes from the assumption that\parallel {\mathit{f}}_{i}^{\ast}\parallel =1, and the last inequality is due to (19). By applying this result to (42) and exploiting the fact that\parallel {\mathit{f}}_{i}^{\ast}-\mathit{f}\parallel \le \tau under{\mathcal{H}}_{0i} for some\mathit{f}\in \mathcal{D}, we have
Endnotes
^{a} Note that τ cannot exceed\sqrt{2} because we assume that all targets of interest, including those in\mathcal{D} and the actual target f^{∗}, are unit-norm.
^{b} The anomaly detection problem discussed here is more accurately described as target detection in the classical detection theory vocabulary. However, in recent works[24, 25], the authors assume that the nominal distribution is obtained from training data, and a test sample is declared anomalous if it falls outside of the nominal distribution learned from the training data. Our work is in a similar spirit: we learn our dictionary from training data and label any test spectrum that does not correspond to our dictionary as anomalous.
^{c} The authors would like to thank Prof. Roummel Marcia for fruitful discussions related to this point.
References
Candès EJ, Tao T: Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 2006, 52(12):5406–5425.
Donoho D: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52(4):1289–1306.
Davenport M, Duarte M, Wakin M, Laska J, Takhar D, Kelly K, Baraniuk R: The smashed filter for compressive classification and target recognition. Proceedings of SPIE, vol. 6498 (San Jose, CA, 2007), pp. 142–153
Duarte MF, Davenport MA, Wakin MB, Baraniuk RG: Sparse signal detection from incoherent projections. IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 3 (Toulouse, France, 2006), pp. 305–308
Haupt J, Castro R, Nowak R, Fudge G, Yeh A: Compressive sampling for signal classification. Fortieth Asilomar Conference on Signals, Systems and Computers, 2006, pp. 1430–1434
Aeron S, Saligrama V, Zhao M: Information theoretic bounds for compressed sensing. IEEE Trans. Inf. Theory 2010, 56(10):5111–5130.
Arias-Castro E, Eldar Y: Noise folding in compressed sensing. IEEE Signal Process. Lett 2011, 18: 478–481.
Han J, Bhanu B: Fusion of color and infrared video for moving human detection. Pattern Recogn 2007, 40(6):1771–1784. 10.1016/j.patcog.2006.11.010
Johnson W, Wilson D, Fink W, Humayun M, Bearman G: Snapshot hyperspectral imaging in ophthalmology. J. Biomed. Optics 2007, 12(1):014036-1–014036-7. 10.1117/1.2434950
Lin R, Dennis B, Benz A: The Reuven Ramaty High-Energy Solar Spectroscopic Imager (RHESSI): Mission Description and Early Results. (Kluwer Academic Publishers, Dordrecht, 2003)
Martin M, Newman S, Aber J, Congalton R: Determining forest species composition using high spectral resolution remote sensing data. Remote Sens. Envir 1998, 65(3):249–254. 10.1016/S0034-4257(98)00035-2
Martin M, Wabuyele M, Chen K, Kasili P, Panjehpour M, Phan M, Overholt B, Cunningham G, Wilson D, DeNovo R, Vo-Dinh T: Development of an advanced hyperspectral imaging (HSI) system with applications for cancer detection. Ann. Biomed. Eng 2006, 34(6):1061–1068. 10.1007/s10439-006-9121-9
Miller J, Elvidge C, Rock B, Freemantle J: An airborne perspective on vegetation phenology from the analysis of AVIRIS data sets over the Jasper Ridge biological preserve. Geoscience and Remote Sensing Symposium (IGARSS’90): Remote sensing for the nineties (College Park, MD, 20–24 May 1990), pp. 565–568
Stellman C, Hazel G, Bucholtz F, Michalowicz J, Stocker A, Schaaf W: Real-time hyperspectral detection and cuing. Opt. Eng 2000, 39: 1928–1935. 10.1117/1.602577
Zuzak K, Naik S, Alexandrakis G, Hawkins D, Behbehani K, Livingston E: Intraoperative bile duct visualization using near-infrared hyperspectral video imaging. Am. J. Surg 2008, 195(4):491–497. 10.1016/j.amjsurg.2007.05.044
Brady D, Gehm M: Compressive imaging spectrometers using coded apertures. Proc. of SPIE, vol. 6246 (Kissimmee, Florida, 2006), pp. 62460A1–62460A9
DeVerse RA, Coifman RR, Coppi AC, Fateley WG, Geshwind F, Hammaker RM, Valenti S, Warner FJ, Davis GL: Application of Spatial Light Modulators for New Modalities in Spectrometry and Imaging. Spectral Imaging: Instrumentation, Applications, and Analysis II, vol. 4959, ed. by RM Levenson, GH Bearman, A MahadevanJansen (2003), pp. 12–22
Gehm M, John R, Brady D, Willett R, Schulz T: Single-shot compressive spectral imaging with a dual-disperser architecture. Opt. Express 2007, 15(21):14013–14027. 10.1364/OE.15.014013
Takhar D, Laska J, Wakin MB, Duarte MF, Baron D, Sarvotham S, Kelly K, Baraniuk RG: A new compressive imaging camera architecture using optical-domain compression. Proc. IS&T/SPIE Symposium on Electronic Imaging (San Jose, CA, 2006), pp. 43–52
Wagadarikar A, John R, Willett R, Brady D: Single disperser design for coded aperture snapshot spectral imaging. Appl. Opt 2008, 47(10):B44–B51. 10.1364/AO.47.000B44
Woolfe F, Maggioni M, Davis G, Warner F, Coifman R, Zucker S: Hyperspectral microscopic discrimination between normal and cancerous colon biopsies. Manuscript (2006)
Manolakis D, Marden D, Shaw G: Hyperspectral image processing for automatic target detection applications. Lincoln Laboratory J 2003, 14(1):79–116.
Wei G, Agnihotri L, Dimitrova N: TV program classification based on face and text processing. 2000 IEEE International Conference on Multimedia and Expo, ICME 2000, vol. 3 (2000), pp. 1345–1348
Hero AO: Geometric entropy minimization (GEM) for anomaly detection and localization. Proc. Advances in Neural Information Processing Systems (NIPS) (MIT Press, Vancouver, Canada, 2006), pp. 585–592
Steinwart I, Hush D, Scovel C: A classification framework for anomaly detection. J. Mach. Learn. Res 2005, 6: 211–232.
Stein D, Beaven S, Hoff L, Winter E, Schaum A, Stocker A: Anomaly detection from hyperspectral imagery. IEEE Signal Process. Mag 2002, 19(1):58–69. 10.1109/79.974730
Manolakis D, Shaw G: Detection algorithms for hyperspectral imaging applications. IEEE Signal Process. Mag 2002, 19(1):29–43. 10.1109/79.974724
Berger JO: Statistical Decision Theory and Bayesian Analysis,. (Springer, New York, 1985)
Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B (Methodological) 1995, 57(1):289–300.
Jin X, Paswaters S, Cline H: A comparative study of target detection algorithms for hyperspectral imagery. Proceedings of SPIE, vol. 7334 (2009), p. 73341W
Kelly E: An adaptive detection algorithm. IEEE Trans. Aerospace Electron. Syst. AES 1986, 22(2):115–127.
Kraut S, Scharf L, McWhorter L: Adaptive subspace detectors. IEEE Trans. Signal Processing 2001, 49(1):1–16. 10.1109/78.890324
Scharf L, Friedlander B: Matched subspace detectors. IEEE Trans. Signal Process 1994, 42(8):2146–2157. 10.1109/78.301849
Kwon H, Nasrabadi N: Kernel matched subspace detectors for hyperspectral target detection. IEEE Trans. Pattern Anal. Mach. Intell 2006, 28(2):178–194.
Scharf LL, McWhorter LT: Adaptive matched subspace detectors and adaptive coherence estimators. Conference Record of the Thirtieth Asilomar Conference on Signals, Systems and Computers (Pacific Grove, CA, 1996), pp. 1114–1117
Parmar M, Lansel S, Wandell B: Spatiospectral reconstruction of the multispectral datacube using sparse recovery. 15th IEEE International Conference on Image Processing (San Diego, CA, 2008), pp. 473–476
Willett R, Gehm M, Brady D: Multiscale reconstruction for computational spectral imaging. Comput. Imag. V 2007, 6498: 64980L1–64980L15.
Fowler J, Du Q: Anomaly detection and reconstruction from random projections. IEEE Trans. Image Process 2012, 21(1):184–195.
Reed I, Yu X: Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution. IEEE Trans. Acoust. Speech Signal Process 1990, 38(10):1760–1770. 10.1109/29.60107
Krishnamurthy K, Raginsky M, Willett R: Hyperspectral target detection from incoherent projections. IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP) (Dallas, TX, 2010), pp. 3550–3553
Krishnamurthy K, Raginsky M, Willett R: Hyperspectral target detection from incoherent projections: nonequiprobable targets and inhomogeneous SNR. 17th IEEE International Conference on Image Processing (ICIP) (Hong Kong, 2010), pp. 1357–1360
Boardman JW: Spectral Angle Mapping: A Rapid Measure of Spectral Similarity. (AVIRIS, 1993)
Guo Z, Osher S: Template matching via L1 minimization and its application to hyperspectral data. Accepted to Inverse Problems and Imaging (IPI), 2009
Kwon H, Nasrabadi N: Kernel RX-algorithm: a nonlinear anomaly detector for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens 2005, 43(2):388–397.
Szlam A, Guo Z, Osher S: A split Bregman method for nonnegative sparsity penalized least squares with applications to hyperspectral demixing. IEEE 17th International Conference on Image Processing (ICIP) (Hong Kong, 2010), pp. 1917–1920
Chang C: Virtual dimensionality for hyperspectral imagery. SPIE Newsroom 2009, 10(2.1200909):1749.
Chang C, Du Q: Estimation of number of spectrally distinct signal sources in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens 2004, 42(3):608–619. 10.1109/TGRS.2003.819189
Storey J: The positive false discovery rate: a Bayesian interpretation and the q-value. Ann. Stat 2003, 31(6):2013–2035.
Johnson W, Lindenstrauss J: Extensions of Lipschitz maps into a Hilbert space. Contemp. Math 1984, 26: 189–206.
Healey G, Slater D: Models and methods for automated material identification in hyperspectral imagery acquired under unknown illumination and atmospheric conditions. IEEE Trans. Geosci. Remote Sens 1999, 37(6):2706–2717. 10.1109/36.803418
Achlioptas D: Databasefriendly random projections. Proc. 20th ACM Symp. Principles of Database Systems (ACM Press, New York, NY 2001), pp. 274–281
Baraniuk R, Davenport M, DeVore R, Wakin M: A simple proof of the restricted isometry property for random matrices. Constructive Approx 2008, 28(3):253–263. 10.1007/s00365-007-9003-x
Krahmer F, Ward R: New and improved Johnson–Lindenstrauss embeddings via the restricted isometry property. SIAM Journal on Mathematical Analysis 2011, 43(3):1269–1281. Arxiv preprint arXiv:1009.0744, 2010. 10.1137/100810447
Wasserman L: All of Statistics: A Concise Course in Statistical Inference. (Springer, New York, NY 2004)
Tao T, Vu V: On random ±1 matrices: singularity and determinant. Random Struct. Algor 2006, 28(1):1–23. 10.1002/rsa.20109
Tao T: Talagrand’s concentration inequality. http://terrytao.wordpress.com/2009/06/09/talagrands-concentration-inequality/. Accessed 08/03/2012
Kruse FA, Boardman JW, Lefkoff AB, Young JM, Kierein-Young KS, Cocks TD, Jensen R, Cocks PA: HyMap: an Australian hyperspectral sensor solving global problems: results from USA HyMap data acquisitions. Proc. of the 10th Australasian Remote Sensing and Photogrammetry Conference (Adelaide, Australia, 2000), pp. 18–23
Davidson KR, Szarek SJ: Local operator theory, random matrices and Banach spaces. In Handbook of the Geometry of Banach Spaces, vol. 1 (North-Holland, Amsterdam, 2001), pp. 317–366
Acknowledgements
This work was supported by the NSF Award No. DMS-0811062, DARPA Grant No. HR0011-09-1-0036, and AFRL Grant No. FA8650-07-D-1221.
Additional information
Competing interest
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Krishnamurthy, K., Willett, R. & Raginsky, M. Target detection performance bounds in compressive imaging. EURASIP J. Adv. Signal Process. 2012, 205 (2012). https://doi.org/10.1186/1687-6180-2012-205
DOI: https://doi.org/10.1186/1687-6180-2012-205
Keywords
 Target detection
 Anomaly detection
 False discovery rate
 p-value
 Incoherent projections
 Compressive sensing