Target detection performance bounds in compressive imaging

This article describes computationally efficient approaches and associated theoretical performance guarantees for the detection of known targets and anomalies from few projection measurements of the underlying signals. The proposed approaches accommodate signals of different strengths contaminated by a colored Gaussian background, and perform detection without reconstructing the underlying signals from the observations. The theoretical performance bounds of the target detector highlight fundamental tradeoffs among the number of measurements collected, amount of background signal present, signal-to-noise ratio, and similarity among potential targets coming from a known dictionary. The anomaly detector is designed to control the number of false discoveries. The proposed approach does not depend on a known sparse representation of targets; rather, the theoretical performance bounds exploit the structure of a known dictionary of targets and the distance preservation property of the measurement matrix. Simulation experiments illustrate the practicality and effectiveness of the proposed approaches.


Introduction
The theory of compressive sensing (CS) has shown that it is possible to accurately reconstruct a sparse signal from few (relative to the signal dimension) projection measurements [1,2]. Though such a reconstruction is crucial for visually inspecting the signal, there are many instances where one is solely interested in identifying whether the underlying signal is one of several possible signals of interest. In such situations, a complete reconstruction is computationally expensive and does not optimize the correct performance metric. Recently, CS ideas have been exploited in [3-5] to perform target detection and classification from projection measurements without reconstructing the underlying signal of interest. In [3,5], the authors propose nearest-neighbor based methods to classify a signal f ∈ R^N as one of m known signals given projection measurements of the form y = Af + n ∈ R^K for K ≤ N, where A ∈ R^(K×N) is a known projection operator and n ∼ N(0, σ²I) is additive Gaussian noise. This model is simple to analyze but impractical, since in reality a signal is always corrupted by some kind of interference or background noise, and extending the methods in [3,5] to handle background noise is nontrivial. Though Duarte et al. [4] provide a way to account for background contamination, they make the strong assumption that the signal of interest and the background are sparse in mutually incoherent bases, which may not hold in many applications. Recent works on CS [6,7] allow the input signal f to be corrupted by some pre-measurement noise b ∼ N(0, σ_b²I), so that one observes y = A(f + b) + n, and study reconstruction performance as a function of the number of measurements, the pre- and post-measurement noise statistics, and the dimension of the input signal. In this work, however, we are interested in performing target detection without an intermediate reconstruction step. Furthermore, the increased utility of high-dimensional imaging techniques such as spectral imaging or videography in applications like remote sensing, biomedical imaging, and astronomical imaging [8-15] necessitates the extension of compressive target detection ideas to such imaging modalities to achieve reliable target detection from fewer measurements relative to the ambient signal dimension. For example, recent advances in CS have led to the development of new spectral imaging platforms which attempt to address challenges in conventional imaging platforms related to system size, resolution, and noise by acquiring fewer compressive measurements than spatiospectral voxels [16-21]. However, these system designs have a number of degrees of freedom which influence subsequent data analysis. For instance, the single-shot compressive spectral imager discussed in [18] collects one coded projection of each spectrum in the scene. One projection per spectrum is sufficient for reconstructing spatially homogeneous spectral images, since projections of neighboring locations can be combined to infer each spectrum. Significantly more projections are required for detecting targets of unknown strengths without the benefit of spatial homogeneity. We are interested in investigating how several such systems can be used in parallel to reliably detect spectral targets and anomalies from different coded projections.
In general, we consider a broadly applicable framework that allows us to account for background and sensor noise and to perform target detection directly from projection measurements of signals obtained at different spatial or temporal locations. The precise problem formulation is provided below.

Problem formulation
Let us assume access to a dictionary of possible targets of interest D = {f^(1), f^(2), . . ., f^(m)}, where f^(j) ∈ R^N is unit-norm for j = 1, . . ., m. Our measurements are of the form

z_i = Φ(α_i f*_i + b_i) + w_i,  i = 1, . . ., M,  (1)

where
• i ∈ {1, . . ., M} indexes the spatial or temporal locations at which data are collected;
• α_i ≥ 0 is a measure of the signal-to-noise ratio at location i, which is either known or estimated from observations;
• Φ ∈ R^(K×N), for K < N, is a measurement matrix to be specified in Section "Whitening compressive observations";
• b_i ∈ R^N ∼ N(0, Σ_b) is the background noise vector, and w_i ∈ R^K ∼ N(0, σ²I) is the i.i.d. sensor noise.
For example, in the case of spectral imaging f*_i represents the spectrum at the ith spatial location, and in video sequences f*_i represents the vectorized image frame obtained at the ith time interval. In this article we consider the following target detection problems:
(1) Dictionary signal detection (DSD): Here we assume that each f*_i ∈ D for i ∈ {1, . . ., M}, and our task is to detect all instances of one target signal f^(j) ∈ D for some unknown j ∈ {1, . . ., m}, i.e., to locate S = {i : f*_i = f^(j)}. DSD is useful in contexts in which we know the makeup of a scene and wish to focus our attention on the locations of a particular signal. For instance, in spectral imaging, DSD is used to study a scene of interest by classifying every spectrum in the scene into different known classes [11,22]. In a video setup, DSD could be used to classify video segments into one of several categories (such as news, weather, sports, etc.) by projecting the video sequence onto an appropriate feature space and comparing the feature vectors to the ones in a known dictionary [23].
(2) Anomalous signal detection (ASD): Here, our task is to detect all signals which are not members of our dictionary, i.e., to detect S = {i : f*_i ∉ D} (this is akin to anomaly detection methods in the literature which are based on nominal, nonanomalous training samples [24,25]). For instance, ASD may be used when we know most components of a spectral image and wish to identify all spectra which deviate from this model [26].
Our goal is to accurately perform DSD or ASD without reconstructing the spectral input f*_i from z_i for i ∈ {1, . . ., M}. Accounting for background is a crucial issue. Typically, the background corresponding to the scene of interest and the sensor noise are modeled together by a colored multivariate Gaussian distribution [27]. However, in our case it is important to distinguish the two because of the presence of the projection operator Φ: the projection operator acts upon the background spectrum in the same way as on the target spectrum, but it does not affect the sensor noise. We assume that b_i and w_i are independent of each other, and that the prior probabilities of the different targets in the dictionary, p^(j) = P(f*_i = f^(j)) for j ∈ {1, . . ., m}, are known in advance. If these probabilities are unknown, the targets can be considered equally likely. Given this setup, our goal is to develop suitable target and anomaly detection approaches and to provide theoretical guarantees on their performance.
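The observation model above is easy to simulate. The sketch below draws measurements of the form z_i = Φ(α_i f*_i + b_i) + w_i for a randomly generated dictionary; the Gaussian Φ, the isotropic background covariance, and all dimensions are illustrative assumptions, not choices made in this article.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M, m = 64, 16, 100, 5          # ambient dim, measurements, locations, dictionary size
sigma, sigma_b = 0.1, 0.05           # sensor and background noise scales (hypothetical)

# Hypothetical unit-norm dictionary of m candidate targets.
D = rng.standard_normal((m, N))
D /= np.linalg.norm(D, axis=1, keepdims=True)

Phi = rng.standard_normal((K, N)) / np.sqrt(K)   # placeholder measurement matrix
Sigma_b = sigma_b ** 2 * np.eye(N)               # illustrative background covariance

labels = rng.integers(0, m, size=M)              # true class at each location
alpha = rng.uniform(1.0, 2.0, size=M)            # per-location signal strength

# z_i = Phi(alpha_i f*_i + b_i) + w_i, stacked over all M locations.
B = rng.multivariate_normal(np.zeros(N), Sigma_b, size=M)  # background b_i
W = sigma * rng.standard_normal((M, K))                    # sensor noise w_i
Z = (alpha[:, None] * D[labels] + B) @ Phi.T + W

print(Z.shape)  # one K-dimensional measurement per location
```

Each row of Z is one compressive measurement z_i; the detection methods below operate on these rows directly, never on reconstructions of f*_i.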
In this article, we develop detection performance bounds which show how performance scales with the number of detectors in a compressive setting as a function of SNR, the similarity between potential targets in a known dictionary, and their prior probabilities. Our bounds are based on a detection strategy which operates directly on the collected data, as opposed to first reconstructing each f*_i and then performing detection on the estimated signals. Reconstruction as an intermediate step in detection may be appealing to end users who wish to visually inspect spectral images instead of relying entirely on an automatic detection algorithm. However, using this intermediate step has two potential pitfalls. First, the Rao-Blackwell theorem [28] tells us that an optimal detection algorithm operating on processed data (i.e., not sufficient statistics) cannot perform better than an optimal detection algorithm operating on the raw data. In other words, optimal performance is possible on the raw data, but we have no such performance guarantee for the reconstructed signals. Second, the relationship between reconstruction errors and detection performance is not well understood in many settings. Although we do not reconstruct the underlying signals, our performance bounds are intimately related to the signal resolution needed to achieve the signal diversity present in our dictionary. Since we have many fewer observations than the signals at this resolution, we adopt the "compressive" terminology.

Performance metric
To assess the performance of our detection strategies, we consider the false discovery rate (FDR) metric and related quantities developed for multiple hypothesis testing problems [29]. Since we collect M independent observations of potentially different signals, we are simultaneously conducting M hypothesis tests when we search for targets. Unlike the probability of false alarm, which measures the probability of falsely declaring a target for a single test, the FDR measures the fraction of declared targets that are false alarms; that is, it provides information about the entire set of M hypotheses instead of just one. More formally, the FDR is given by

FDR = E[V/R],

where V is the number of falsely rejected null hypotheses, R is the total number of rejected null hypotheses, and V/R is defined to be zero when R = 0. Controlling the FDR in a multiple hypothesis testing framework is akin to designing a constant false alarm rate (CFAR) detector in spectral target detection applications, which keeps the false alarm rate at a desired level irrespective of the background interference and sensor noise statistics [22].
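The per-realization quantity V/R behind the FDR can be computed directly from a vector of decisions. A minimal sketch, with invented decision and ground-truth vectors for illustration:

```python
import numpy as np

def false_discovery_proportion(rejected, null_true):
    """V/R for one realization of M tests; defined as 0 when nothing is rejected."""
    rejected = np.asarray(rejected, dtype=bool)
    null_true = np.asarray(null_true, dtype=bool)
    R = rejected.sum()                 # total rejections
    V = (rejected & null_true).sum()   # false rejections (null was actually true)
    return V / R if R > 0 else 0.0

# Toy example: 10 tests, the null is true for the first 6, and tests 4..7 are rejected.
null_true = np.array([True] * 6 + [False] * 4)
rejected = np.zeros(10, dtype=bool)
rejected[4:8] = True
print(false_discovery_proportion(rejected, null_true))  # 2 false rejections out of 4 -> 0.5
```

The FDR is the expectation of this proportion over realizations, which is why it summarizes all M tests at once rather than a single test's false alarm probability.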

Previous investigations
Much of the classical target detection literature [30-34] assumes that each target lies in a P-dimensional subspace of R^N for P < N. The subspace in which the target lies is often assumed to be known or specified by the user, and the variability of the background is modeled using a probability distribution. Given knowledge of the target subspace, background statistics, and sensor noise statistics, detection methods based on likelihood ratio tests (LRTs) and generalized likelihood ratio tests (GLRTs) have been proposed in [30-35]. A subspace model is optimal if the subspace in which targets lie is known in advance; in many applications, however, such subspaces are hard to characterize. An alternative and more flexible option is to assume that the high-dimensional target exhibits some low-dimensional structure that can be exploited to perform efficient target detection. This approach is utilized in this work and in [5], where the target signal in R^N is assumed to come from a dictionary of m known signals with m ≪ N, and in [3], where the targets are assumed to lie in a low-dimensional manifold embedded in the high-dimensional target space.
Recently, several methods for target or anomaly detection that rely on recovering the full spatiospectral data from projection measurements [36,37] have been proposed. However, they are computationally intensive, and the detection performance associated with these reconstructions is unknown. Other researchers have exploited CS to perform target detection and classification without reconstructing the underlying signal [3-5]. Duarte et al. [4] propose a matching pursuit based algorithm, called the incoherent detection and estimation algorithm (IDEA), to detect the presence of a signal of interest against a strong interfering signal from noisy projection measurements. The algorithm is shown to perform well on experimental data sets under some strong assumptions on the sparsity of the signal of interest and the interfering signal. Davenport et al. [3] develop a classification algorithm called the smashed filter to classify an image in R^N into one of m known classes from K projections of the signal, where K < N. The underlying image is assumed to lie on a low-dimensional manifold, and the algorithm finds the closest match from the m known classes by performing a nearest-neighbor search over the m different manifolds. The projection measurements are chosen to preserve the distances among the manifolds. Though Davenport et al. [3] offer theoretical bounds on the number of measurements necessary to preserve distances among different manifolds, it is not clear how the performance scales with K or how to incorporate background models into this setup. Moreover, this approach may be computationally intensive since it involves learning and searching over different manifolds. Haupt et al. [5] use a nearest-neighbor classifier to classify an N-dimensional signal into one of m equally likely target classes based on K < N random projections, and provide theoretical guarantees on the detector performance. While the method discussed in [5] is computationally efficient, it is nontrivial to extend to the case of target detection with colored background noise and nonequiprobable targets. Furthermore, their performance guarantees cannot be directly extended to our problem, since we focus on error measures that let us analyze the performance of multiple hypothesis tests simultaneously, as opposed to the above methods, which consider compressive classification performance for a single hypothesis test.
The authors of a more recent work [38] extend the classical RX anomaly detector [39] to directly detect anomalies from random, orthonormal projection measurements without an intermediate reconstruction step. They numerically show how the detection probability improves as a function of the signal-to-noise ratio as the number of measurements changes. Though probability of detection is a good performance measure, in many applications controlling the false discoveries below a desired level is more crucial. As a result, in our work we propose an anomaly detection method that controls the FDR below a desired level.

Contributions
This article makes the following contributions to the above literature:
• A compressive target detection approach, which (a) is computationally efficient, (b) allows for the signal strengths of the targets to vary with spatial location, (c) allows for backgrounds mixed with potential targets, (d) considers targets with different a priori probabilities, and (e) yields theoretical guarantees on detector performance. This article unifies preliminary work by the authors [40,41], presents previously unpublished aspects of the proofs, and contains updated experimental results.
• A computationally efficient anomaly detection method that detects anomalies of different strengths from projection measurements and also controls the FDR at a desired level.
• A whitening filter approach to compressive measurements of signals with background contamination, and associated analysis leading to bounds on the amount of background to which our detection procedure is robust.
The above theoretical results, which are the main focus of this article, are supported with simulation studies in Section "Experimental results". Classical detection methods described in [22,26,27,30-35,39,42-45] do not establish performance bounds as a function of signal resolution or target dictionary properties, and they rely on relatively direct observation models which we show to be suboptimal when the detector size is limited. The methods in [3,4] do not contain performance analysis, and our analysis builds upon the analysis in [5] to account for several specific aspects of the compressive target detection problem.

Whitening compressive observations
Before we present our detection methods for the DSD and ASD problems, we briefly discuss a whitening step that is common to both problems.
Let us suppose that there are enough background training data available to estimate the background mean μ_b and covariance matrix Σ_b. We can assume without loss of generality that μ_b = 0, since Φμ_b can be subtracted from the observations. Given the knowledge of the background statistics, we can transform the background and sensor noise model Φb_i + w_i ∼ N(0, ΦΣ_bΦ^T + σ²I) discussed in (1) to a simple white Gaussian noise model by multiplying the observations z_i, i ∈ {1, . . ., M}, by the whitening filter C ≜ (ΦΣ_bΦ^T + σ²I)^(−1/2). This whitening transformation reduces the observation model in (1) to

y_i = C z_i = α_i A f*_i + n_i,  (2)

where

A = CΦ = (ΦΣ_bΦ^T + σ²I)^(−1/2) Φ  (3)

and n_i = C(Φb_i + w_i). To verify that n_i ∼ N(0, I), observe that

Cov(n_i) = C (ΦΣ_bΦ^T + σ²I) C^T = I.

We can now choose Φ so that the corresponding A has certain desirable properties, as detailed in Sections "Dictionary signal detection" and "Anomalous signal detection". For a given A, the following theorem provides a construction of Φ that satisfies (3) and a bound on the maximum tolerable background contamination:

Theorem 1. Let λ_max denote the largest eigenvalue of Σ_b. If λ_max ‖A‖² < 1, where ‖A‖ is the spectral norm of A, then B ≜ I − AΣ_bA^T is positive definite and Φ = σ B^(−1/2) A is a sensing matrix, which can be used in conjunction with a whitening filter to produce observations modeled in (2).
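The whitening construction can be checked numerically. The sketch below takes B = I − AΣ_bA^T with Φ = σB^(−1/2)A, one concrete choice for which the whitening filter acts exactly, and verifies that C recovers A and yields unit-covariance noise; the isotropic Σ_b and Gaussian A are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, sigma = 32, 8, 0.1
A = rng.standard_normal((K, N)) / np.sqrt(K)

# Scale the background so that lambda_max * ||A||^2 < 1 holds.
lam = 0.5 / np.linalg.norm(A, 2) ** 2
Sigma_b = lam * np.eye(N)                  # illustrative isotropic background

def sqrtm_sym(S):
    """Symmetric matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(w)) @ V.T

B = np.eye(K) - A @ Sigma_b @ A.T          # positive definite under the condition above
Phi = sigma * np.linalg.inv(sqrtm_sym(B)) @ A

# Whitening filter C = (Phi Sigma_b Phi^T + sigma^2 I)^(-1/2).
Cov = Phi @ Sigma_b @ Phi.T + sigma ** 2 * np.eye(K)
C = np.linalg.inv(sqrtm_sym(Cov))

print(np.allclose(C @ Phi, A))                # the whitened sensing matrix is A
print(np.allclose(C @ Cov @ C.T, np.eye(K)))  # whitened noise covariance is I
```

In hardware, Φ would be programmed onto the light modulator and C applied digitally to the recorded measurements.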
The proof of this theorem is provided in Appendix 1. The theorem establishes an interesting relationship between the maximum background perturbation that the system can tolerate and the spectral norm of the measurement matrix, which in turn varies with K and N. Hardware designs such as those in [17,19] use spatial light modulators and digital micromirrors, which allow the measurement matrix Φ to be adjusted easily in response to changing background statistics and other operating conditions.
In the sections that follow, we consider collecting measurements of the form y_i = α_i A f*_i + n_i given in (2), where f*_i is the target of interest for i = 1, . . ., M, and A ∈ R^(K×N) is a sensing matrix that satisfies (3). It is assumed that any background contamination has been eliminated with the whitening procedure described in this section.

Dictionary signal detection
Suppose that the end user wants to test for the presence of one known target versus the rest, but it is not known a priori which target from D the user wants to detect. In this case, let us cast the DSD problem as a multiple hypothesis testing problem of the form

H_(0,i): f*_i = f^(j)  versus  H_(1,i): f*_i ≠ f^(j),  (5)

where f^(j) ∈ D is the target of interest and i = 1, . . ., M.

Decision rule
We define our decision rule corresponding to target i such that one rejects the ith null hypothesis if its test statistic y_i falls in the ith significance region Γ_i^(j). Specifically, Γ_i^(j) is defined according to

Γ_i^(j) = { y_i : max_(l≠j) T_i^(l)(y_i) > T_i^(j)(y_i) },  (6)

where

T_i^(l)(y_i) = −(1/2) ‖y_i − α_i A f^(l)‖₂² + log p^(l)

is, up to constants, the logarithm of the a posteriori probability density of the target f^(l) at the ith spatial location given the observations y_i, the signal-to-noise ratio α_i and the sensing matrix A, and p^(l) is the a priori probability of target class l. Note that the process of determining these decision regions involves a sequence of nearest-neighbor calculations, so the computational complexity scales with the number of classes m. In this work, we operate under the assumption that m is much smaller than the dimensionality of the data sets we consider. For example, if we consider spectral images, then the number of objects (signal classes) that make up a scene of interest is often smaller than the number of voxels in the image. This assumption is not unrealistic and has been exploited in earlier work such as [22] and the references therein. In most of the prior work we have surveyed [46,47], the number of signal classes is less than 35, which keeps our approach tractable.
The decision rule can be formally expressed in terms of the significance regions as follows: reject the ith null hypothesis if and only if y_i ∈ Γ_i^(j).  (7)

We analyze this detector by extending the positive FDR (pFDR) error measure introduced by Storey [48] to characterize the errors encountered in performing multiple, independent, and nonidentical hypothesis tests simultaneously. The pFDR, discussed formally below, is the fraction of falsely rejected null hypotheses among the total number of rejected null hypotheses, subject to the positivity condition that one rejects at least one null hypothesis. The pFDR is similar to the FDR except that the positivity condition is enforced. In our context, the positivity condition means that we declare at least one signal to be a nontarget, which in turn implies that the scene of interest is composed of more than one object in the case of spectral imaging, or that the scene is not static in the case of video imaging.
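The decision rule amounts to a prior-weighted nearest-neighbor search over the dictionary. A minimal sketch with a random dictionary, equal priors, and hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, m = 64, 24, 4
D = rng.standard_normal((m, N))
D /= np.linalg.norm(D, axis=1, keepdims=True)     # hypothetical unit-norm dictionary
A = rng.standard_normal((K, N)) / np.sqrt(K)      # stand-in for a whitened sensing matrix
prior = np.full(m, 1.0 / m)                       # equal a priori probabilities

def map_classify(y, alpha, A, D, prior):
    """Maximize the log posterior -||y - alpha*A f||^2 / 2 + log p over the dictionary."""
    scores = [-0.5 * np.linalg.norm(y - alpha * A @ f) ** 2 + np.log(p)
              for f, p in zip(D, prior)]
    return int(np.argmax(scores))

alpha, true_j = 5.0, 2
y = alpha * A @ D[true_j] + 0.1 * rng.standard_normal(K)  # observation from class 2
print(map_classify(y, alpha, A, D, prior))
```

With unequal priors, the log p term biases the decision toward more probable targets, which is why the bounds below involve both p_min and p_max.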
Consider a collection of significance regions Γ = {Γ_i^(j)}. The pFDR for multiple, nonidentical hypothesis tests can be defined in terms of the significance regions as follows:

pFDR(Γ) = E[ V(Γ)/R(Γ) | R(Γ) > 0 ],  (8)

where V(Γ) = Σ_(i=1)^M I{y_i ∈ Γ_i^(j), H_(0,i) true} is the number of falsely rejected null hypotheses, R(Γ) = Σ_(i=1)^M I{y_i ∈ Γ_i^(j)} is the total number of rejected null hypotheses, and I{E} = 1 if event E is true and 0 otherwise. In our setup, the pFDR corresponds to the expected ratio of the number of missed targets to the number of signals declared to be nontargets, subject to the condition that at least one signal is declared to be a nontarget (note that this ratio is traditionally referred to as the positive false nondiscovery rate (pFNR), but it is technically the pFDR in this context because of our definitions of the null and alternative hypotheses). The theorem below presents our main result:

Theorem 2. Given observations of the form (2), if one performs multiple, independent, nonidentical hypothesis tests of the form (5) and decides according to (7), then the worst-case pFDR, given by pFDR_max = max_(j∈{1,...,m}) pFDR^(j)(Γ), satisfies the following bound:

pFDR_max ≤ (P_e)_max / (1 − p_max − (P_e)_max),  (11)

where p_max = max_(j∈{1,...,m}) p^(j) and (P_e)_max denotes the worst-case misclassification probability of the decision rule (7).

The proof of this theorem is detailed in Appendix 2. A key element of our proof is the adaptation of the techniques from [48] to nonidentical independent hypothesis tests.

An achievable bound on the worst-case pFDR
Theorem 2 in the preceding section shows that, for a given A, the worst-case pFDR is bounded from above by a function of the worst-case misclassification probability. In this section, we use this theorem to establish an achievable bound on the worst-case pFDR that explicitly depends on the number of measurements K, the signal strengths {α_i}_(i=1)^M, the similarity among different targets of interest, and the a priori target probabilities.
Let d_min denote the minimum pairwise distance among the targets in D, α_min = min_i α_i, and p_min = min_j p^(j). We then have the following theorem, whose proof is given in Appendix 3:

Theorem 3. Let λ_max denote the largest eigenvalue of Σ_b. For a given 0 < ε < 1 − p_max, assume that K and N are sufficiently large so that conditions (13a)-(13c) hold. Then there exists a K × N sensing matrix A that satisfies the condition of Theorem 1 and for which the bound (14) holds.

This result has the following implications and consequences:
(1) For a given N, the upper bound (13b) on λ_max increases as K increases, which implies that the system can tolerate more background perturbation if we collect more measurements.
(2) The pFDR bound (14) decays as K, d_min, and α_min increase, and it grows as p_min decreases. For fixed p_max, p_min, α_min, and d_min, the bound in (14) enables one to choose a value of K that guarantees a desired pFDR value.
(3) The dominant part of the bound (14) is independent of N, and is only a function of K, p_max, p_min, α_min, and d_min. The lack of dependence on N is not unexpected. Indeed, when we are interested in preserving pairwise distances among the members of a fixed dictionary of size m, the Johnson-Lindenstrauss lemma [49] says that, with high probability, K = O(log m) random Gaussian projections suffice, regardless of the ambient dimension N. This is precisely the regime we are working with here.
(4) The lower bound on K given in (13c) grows logarithmically with the difference between p_max and p_min. This is to be expected, since one would need more measurements to detect a less probable target, as our decision rule weights each target by its a priori probability. If all targets are equally likely, then p_max = p_min = 1/m and the lower bound on K simplifies accordingly (the first inequality holding since K < N). In addition, the lower bound on K also illustrates the interplay between the signal strength of the targets, the similarity among different targets in D, and the number of measurements collected. A small value of d_min suggests that the targets in D are very similar to each other, and thus α_min and K need to be large enough that similar targets can still be distinguished. The experimental results discussed in Section "Experimental results" illustrate the tightness of the theoretical results discussed here.
Inspection of the proof shows that if A is generated according to a Gaussian distribution, then the conditions of Theorem 3 will be met with high probability.

Extension to a manifold-based target detection framework
The DSD problem formulation in Section "Problem formulation" is accurate if the signals in the dictionary are faithful representations of the target signals that we observe. In reality, however, the target signals will differ from the dictionary signals owing to differences in the experimental conditions under which they are collected. For instance, in spectral imaging applications, the observed spectrum of a material will not match the reference spectrum of the same material observed in a laboratory, because of differences in atmospheric and illumination conditions. To overcome this problem, one could form a large dictionary to account for such uncertainties in the target signals and perform target detection according to the approaches discussed in Sections "Whitening compressive observations" and "Dictionary signal detection". A potential drawback of this approach is that our theoretical performance bound grows with the size of D through p_min and d_min. Instead, one could reasonably model the target signals observed under different experimental conditions as lying in a low-dimensional submanifold of the high-dimensional ambient signal space, as shown to be true for spectral images in [50]. We can exploit this result to extend our analysis to a much broader framework that accounts for uncertainties in our dictionary.
Let us consider a dictionary of manifolds D_M = {M^(1), . . ., M^(m)} corresponding to m different target classes, and suppose that f*_i, for i ∈ {1, . . ., M}, lies in one of the manifolds in D_M. Considering an observation model of the form given in (2), our goal is to determine S = {i : f*_i ∈ M^(j)}, where j ∈ {1, . . ., m} is the target class of interest. Let us assume that all target classes are equally likely to keep the presentation simple, though the analysis extends to the case where the target classes have different a priori probabilities. Suppose that we collect two independent sets of measurements {y_i}_(i=1)^M and {y'_i}_(i=1)^M. Then we can use the following two-step procedure to extend our DSD method to this manifold-based framework:
(1) Given y'_i, form a data-dependent dictionary D_(y'_i) = {f_i^(1), . . ., f_i^(m)} by finding the nearest neighbor of y'_i in each manifold, f_i^(l) = arg min_(f∈M^(l)) ‖y'_i − α_i A f‖₂, for l ∈ {1, . . ., m} and i = 1, . . ., M.
(2) Given y_i and the corresponding D_(y'_i), find the nearest element of D_(y'_i) to y_i and declare that the ith observed spectrum corresponds to class j if this nearest neighbor is f_i^(j).
This two-step procedure is studied in [3] for the case y'_i = y_i, where the authors provide bounds on the number of projection measurements needed to preserve distances among manifolds; however, they do not offer associated target detection performance guarantees. Our analysis and theoretical performance bounds extend directly to this framework if we collect two sets of observations as discussed above. Specifically, the hypothesis tests corresponding to the second step take the form of (5) with the data-dependent dictionary D_(y'_i) in place of D. Since the dictionary in this case changes with i, these tests are nonidentical. This is another instance where our extension of pFDR-based analysis to simultaneous testing of multiple, independent, and nonidentical hypothesis tests (8) is significant. Following the proof techniques discussed in the appendix, we can straightforwardly show that the bound in (14) holds in this manifold setting with p_min = p_max = 1/m, since all target classes are assumed to be equally likely here, and with d_min = min_(i∈{1,...,M}) d_i, where d_i denotes the minimum pairwise distance among the elements of D_(y'_i).
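With each manifold stood in for by a finite sample cloud (a simplification for illustration only; the article's manifolds are continuous), the two-step procedure can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(4)
N, K, m, S = 64, 24, 3, 50   # S samples per discretized "manifold" (all values hypothetical)
A = rng.standard_normal((K, N)) / np.sqrt(K)

# Each manifold M^(l) is approximated here by S random unit-norm points.
manifolds = [rng.standard_normal((S, N)) for _ in range(m)]
manifolds = [P / np.linalg.norm(P, axis=1, keepdims=True) for P in manifolds]

def two_step_classify(y1, y2, alpha):
    # Step 1: per class, find the manifold sample nearest to the first measurement y'.
    dictionary = [P[np.argmin(np.linalg.norm(y1 - alpha * P @ A.T, axis=1))]
                  for P in manifolds]
    # Step 2: classify the second, independent measurement y against that
    # data-dependent dictionary.
    return int(np.argmin([np.linalg.norm(y2 - alpha * A @ f) for f in dictionary]))

alpha, true_class = 5.0, 1
f_star = manifolds[true_class][7]                       # true signal lies on manifold 1
y1 = alpha * A @ f_star + 0.1 * rng.standard_normal(K)  # first measurement set
y2 = alpha * A @ f_star + 0.1 * rng.standard_normal(K)  # second, independent set
print(two_step_classify(y1, y2, alpha))
```

Using two independent measurement sets keeps the second-step tests statistically clean, which is what allows the pFDR analysis to carry over.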

Anomalous signal detection
The target detection approach discussed above assumes that the target signal of interest resides in a dictionary that is available to the user.However, in some applications (such as military applications and surveillance), one might be interested in detecting objects not in the dictionary.In other words, the target signals of interest are anomalous and are not available to the user.In this section, we show how the target detection methods discussed above can be extended to anomaly detection.In particular, we exploit the distance preservation property of the sensing matrix A to detect anomalous targets from projection measurements.

ASD problem formulation
Given observations of the form in (2), we are interested in detecting whether f*_i ∈ D or f*_i is anomalous. Let us write the anomaly detection problem as the following multiple hypothesis test:

H_(0,i): min_(j∈{1,...,m}) ‖f*_i − f^(j)‖₂ ≤ τ,  (15a)
H_(1,i): min_(j∈{1,...,m}) ‖f*_i − f^(j)‖₂ > τ,  (15b)

where τ ∈ [0, √2) is a user-defined threshold that encapsulates our uncertainty about the accuracy with which we know the dictionary.ᵃ In particular, τ controls how different a signal needs to be from every dictionary element to truly be considered anomalous. In the absence of any prior knowledge on the targets of interest, τ can simply be set to zero. The null hypothesis in this setting models normal behavior, while the alternative hypothesis models abnormal or anomalous behavior. This formulation is consistent with the literature [26,38].
Note that the definition of the hypotheses given in (15a) and (15b) matches the definition in (5) for the special case where the dictionary contains just one signal. In this special case, the input signal f*_i is in the dictionary under the null hypothesis in both the DSD and ASD problem formulations.ᵇ

Anomaly detection approach
Our anomaly detection approach and the associated theoretical analysis are based on a "distance preservation" property of A, which is stated formally in (18). We propose an anomaly detection method that controls the FDR below a desired level δ for different background and sensor noise statistics. In other words, we control the expected ratio of falsely declared anomalies to the total number of signals declared anomalous. Note that here we work with the FDR as opposed to the pFDR, since it is possible for a scene to contain no anomalies at all; we let V/R = 0 for R = V = 0, since one does not declare any signal to be anomalous in this case. In [29], Benjamini and Hochberg discuss a p-value based procedure, the "BH procedure", that controls the FDR of M independent hypothesis tests below a desired level. Let

T_i = min_(j∈{1,...,m}) ‖y_i − α_i A f^(j)‖₂²

be the test statistic at the ith location. The p-value can be defined in terms of our test statistic as the probability, under the null hypothesis, of acquiring a test statistic at least as extreme as the one observed, computed with respect to a noise vector n ∼ N(0, I) that is independent of n_i. Let us denote the ordered set of p-values by p_(1) ≤ p_(2) ≤ · · · ≤ p_(M), and let H_(0i) be the null hypothesis corresponding to the ith-smallest p-value. The BH procedure says that if we reject all H_(0i) for i = 1, . . ., t, where t is the largest i for which p_(i) ≤ iδ/M, then the FDR is controlled at δ.
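The BH procedure itself is only a few lines. A sketch with made-up p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, delta):
    """Reject the hypotheses whose sorted p-values fall at or below the BH line."""
    p = np.asarray(pvals, dtype=float)
    M = len(p)
    order = np.argsort(p)
    below = p[order] <= delta * np.arange(1, M + 1) / M
    reject = np.zeros(M, dtype=bool)
    if below.any():
        t = np.max(np.nonzero(below)[0])   # largest i with p_(i) <= i*delta/M
        reject[order[: t + 1]] = True      # reject everything up to and including t
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.6]
print(benjamini_hochberg(pvals, 0.05))
```

Note that all hypotheses up to the largest qualifying index are rejected, even if some intermediate p-values individually exceed their thresholds; this step-up structure is what gives the FDR guarantee.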
To apply this procedure in our setting, we need to find a tractable expression for the p-value at every location. This can be accomplished when A satisfies the distance-preservation condition stated below. Let V = D ∪ {f*_i : i ∈ {1, …, M}} be the set of all signals in the dictionary and the ones whose projections are measured. Note that |V| ≤ M + m. For a given ε ∈ (0, 1), a projection operator A ∈ R^(K×N), K ≤ N, is distance-preserving on V if the following holds for all u, v ∈ V: The existence of such projection operators is guaranteed by the celebrated Johnson and Lindenstrauss (JL) lemma [49], which says that there exist random constructions of A for which (18) holds with high probability [51,52]. Examples of such constructions are: (a) Gaussian matrices whose entries are drawn from N(0, 1/K), (b) Bernoulli matrices whose entries are ±1/√K with probability 1/2, (c) random matrices whose entries are ±√(3/K) with probability 1/6 and zero with probability 2/3 [51,52], and (d) matrices that satisfy the restricted isometry property (RIP) where the signs of the entries in each column are randomized [53].
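The distance-preservation property (18) is easy to check numerically for the Gaussian construction. The sketch below (dimensions and the random unit-norm stand-ins for the signals in V are our choices, not the paper's) measures the worst-case relative distortion of pairwise distances under a Gaussian A with entries N(0, 1/K):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 106, 40, 50

# stand-in for the finite set V: M unit-norm signals in R^N
V = rng.standard_normal((M, N))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Gaussian JL construction: entries drawn i.i.d. from N(0, 1/K)
A = rng.standard_normal((K, N)) / np.sqrt(K)

# empirical distortion: how far ||A(u - v)||^2 / ||u - v||^2 strays from 1
ratios = []
for i in range(M):
    for j in range(i + 1, M):
        d = V[i] - V[j]
        ratios.append(np.linalg.norm(A @ d) ** 2 / np.linalg.norm(d) ** 2)
ratios = np.array(ratios)
eps_hat = max(1 - ratios.min(), ratios.max() - 1)   # empirical epsilon in (18)
```

For a fixed pair, the ratio is a χ²_K/K variable, so the distortion shrinks like 1/√K as more measurements are collected.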
We now state our main theorem, which gives a tight upper bound on the p-value at every location when {α_i} are unknown and are estimated from the observations. Let {α̂_i} be the estimates of {α_i} that satisfy, for i = 1, …, M, where ζ ∈ [0, 1] is a measure of the accuracy of the estimation procedure.
Theorem 4. If the ith hypothesis test is defined according to (15a) and (15b), the projection matrix A satisfies (18) for a given ε ∈ (0, 1), and the estimates {α̂_i} satisfy (19) for some ζ ∈ [0, 1], then the bound holds for all i = 1, …, M, where F(•; K, ν) is the CDF of a noncentral χ² random variable with K degrees of freedom and noncentrality parameter ν [54]. The proof of this theorem is given in Appendix 4. We find the p-value upper bounds at every location and use the BH procedure to perform anomaly detection. The performance of this procedure depends on the values of K, {α_i}, τ and ε. The parameter ε is a measure of the accuracy with which the projection matrix A preserves the distances between any two vectors in R^N. A value of ε close to zero implies that the distances are preserved fairly accurately. When {α_i} are unknown and estimated from the observations, the performance depends on the accuracy of the estimation procedure, which is reflected in our bounds in (20) through ζ.
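A bound of the form in (20) can be evaluated with a noncentral χ² CDF. The sketch below is only illustrative: the display containing the exact constants of (20) did not survive extraction, so the worst-case noncentrality ν = (1 + ε)·α_i²·τ² used here is our assumption about its shape, not the paper's verbatim expression:

```python
import numpy as np
from scipy.stats import ncx2

def pvalue_upper_bound(d2, K, alpha_i, tau, eps):
    """Illustrative p-value upper bound in the spirit of (20).

    Under H_0i the test statistic is a noncentral chi^2_K variable; the
    distance-preservation property (18) and the dictionary tolerance tau
    cap its noncentrality. ASSUMPTION: we take the worst-case
    noncentrality nu = (1 + eps) * alpha_i^2 * tau^2; the paper's exact
    constants may differ.
    """
    nu = (1 + eps) * alpha_i**2 * tau**2
    # survival function of the noncentral chi^2 at the observed statistic d2
    return 1.0 - ncx2.cdf(d2, df=K, nc=nu)
```

The bound is monotone: larger observed statistics d_i² yield smaller p-values, as the BH procedure requires.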
One can easily estimate {α_i} from {y_i} for some choices of A. For instance, if the entries of the projection matrix A are drawn from N(0, 1/K), the {α_i} can be estimated using a maximum likelihood estimator (MLE) by exploiting the statistics of the projection matrix and noise. Note that the jth element of the ith measured spectrum is distributed as N(0, α_i²/K + 1). The MLE of α_i, given by α̂_i = arg max_α P(y_i | A, α), then reduces to α̂_i = (‖y_i‖₂² − K)^(1/2). In practice, we use α̂_i = ((‖y_i‖₂² − K)_+)^(1/2), where (a)_+ = a if a ≥ 0 and 0 otherwise, to ensure that ‖y_i‖₂² − K is treated as nonnegative. We can use concentration inequalities to show that, with high probability, ‖y_i‖₂² is tightly concentrated around its mean E‖y_i‖₂² = α_i² + K, and ([56], Proposition 1 and Remark 1), for any t > 0, for some absolute constants C, c > 0. This result shows that, with high probability, ‖y_i‖₂² − K is nonnegative. The experimental results discussed in Section "Experimental results" demonstrate the performance of this detector as a function of K, {α_i} and τ when {α_i} are known, and as a function of K, τ and ζ when {α_i} are estimated.
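The practical plug-in estimator above is a one-liner. A minimal sketch (function name ours), motivated by E‖y_i‖² = α_i² + K when y_i = α_i·A·f*_i + n_i with unit-norm f*_i, A entries N(0, 1/K), and n_i ∼ N(0, I):

```python
import numpy as np

def estimate_alpha(y, K):
    """Plug-in signal-strength estimate alpha_hat = sqrt((||y||^2 - K)_+).

    Rationale: E||y||^2 = alpha^2 + K under the stated model, so
    ||y||^2 - K is an unbiased estimate of alpha^2; clipping at zero
    guards against the (low-probability) negative case.
    """
    return float(np.sqrt(max(np.dot(y, y) - K, 0.0)))
```

The clipping matters mostly when α_i is small relative to √(2K), the standard deviation of the ‖n_i‖² fluctuation.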

Experimental results
In the experiments that follow, the entries of A are drawn from N (0, 1/K).

Dictionary signal detection
To test the effectiveness of our approach, we formed a dictionary D of nine spectra (corresponding to different kinds of trees, grass, water bodies and roads) obtained from a labeled HyMap (Hyperspectral Mapper) remote sensing data set [57], and simulated a realistic dataset using the spectra from this dictionary. Each HyMap spectrum is of length N = 106. We generated projection measurements of these data such that z_i = α_i Φ(f*_i + b_i) + w_i according to (1), where U denotes the uniform distribution. We let σ² = 5 and model {α_i} to be proportional to √K to account for the fact that the total observed signal energy increases as the number of detectors increases. We transform the z_i by a series of operations to arrive at a model of the form discussed in (2), which is y_i = α_i A f*_i + n_i. For this dataset, p_min = 0.04938, p_max = 0.1481, and d_min = 0.04341.
We evaluate the performance of our detector (7) on the transformed observations, relative to the number of measurements K, by comparing the detection results to the ground truth. Our MAP detector returns a label L^MAP_i for every observed spectrum, which is determined according to where m is the number of signals in D and p(ℓ) is the a priori probability of target class ℓ. In our experiments we evaluate the performance of our classifier when (a) {α_i} are known (AK) and (b) {α_i} are unknown (AU) and must be estimated from y, respectively. The empirical pFDR(j) for each target spectrum j is calculated as follows: where {L^GT_i} denote the ground truth labels. The empirical pFDR(•) is the ratio of the number of missed targets to the total number of signals that were declared to be nontargets. The plots in Figure 1a show the results obtained using our target detection approach under the AK case (shown by a dark gray dashed line) and the AU case (shown by a light gray dashed line), compared to the theoretical upper bound (shown by a solid line). These results are obtained by averaging the pFDR values over 1000 different noise, sensing matrix and background realizations. Note that the theoretical results only apply to the AK case, since they were derived under the assumption of {α_i} being known. The experimental results are shown for both the AK and AU cases to provide a comparison between the two scenarios. In both cases, the worst-case empirical pFDR curves decay as K increases. In the AK case, in particular, the worst-case empirical pFDR curve decays at the same rate as the upper bound. In this experiment, for a fixed α_min and d_min, we chose K to satisfy (13c). The theory is somewhat conservative, and in practice the method works well even when the values of K are below the bound in (13c).
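The empirical pFDR for a target class can be computed directly from the MAP labels and the ground truth. A minimal sketch (function name ours), following the description above — missed targets over declared nontargets, with the V/R = 0 convention:

```python
import numpy as np

def empirical_pfdr(labels_map, labels_gt, j):
    """Empirical pFDR for target class j: among all spectra the detector
    declared NOT to be class j, the fraction whose ground-truth label
    actually is j (missed targets / declared nontargets).
    Returns 0 when nothing is declared a nontarget (V/R = 0 convention)."""
    labels_map = np.asarray(labels_map)
    labels_gt = np.asarray(labels_gt)
    declared_nontarget = labels_map != j
    R = declared_nontarget.sum()
    if R == 0:
        return 0.0
    V = np.sum(declared_nontarget & (labels_gt == j))   # missed targets
    return V / R
```

Averaging this quantity over noise, sensing-matrix, and background realizations, and taking the worst case over j, gives the curves plotted in Figure 1a.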
In the experiment that follows, we let α*_i ∼ U[10, 20], where U denotes a uniform random variable, set α_i = √K α*_i, and evaluate the performance of our detector for different values of K that are not necessarily chosen to satisfy (13c). In addition, we also compare the performance of our detection method to that of a MAP-based target detector operating on downsampled versions of our simulated spectral input image. The reason behind such a comparison is to show what kind of measurements yields better results given a fixed number of detectors.
For an input spectrum g ∈ R^N, we let g̃ ∈ R^K denote its downsampled approximation. Specifically, the jth element of g̃ is Σ_{ℓ=1}^{r} g_{(j−1)r+ℓ}, where r = ⌊N/K⌋. Let us consider making observations of the form where σ² = 5 and c is a constant that is chosen to preserve the mean signal-to-noise ratio corresponding to the downsampled and projection measurements. The MAP-based detector operating on the downsampled data returns a label D^MAP_i for every observed spectrum, which is determined according to where G = Σ̃_b + σ²I, Σ̃_b is the covariance matrix obtained from the downsampled versions of the background training data, and f̃(ℓ) is the downsampled version of f(ℓ) ∈ D. The algorithm declares target spectrum ℓ to be present at the ith location if D^MAP_i = ℓ. In order to illustrate the advantages of using a Φ designed according to (24), we compare the performance of the proposed method when Φ is chosen to be a random Gaussian matrix whose entries are drawn from N(0, 1/K) and when Φ is chosen according to (24). Figure 1b shows a comparison of the results obtained using the projection measurements with Φ designed according to (24), with Φ chosen at random, and using the downsampled measurements under the AK case. These results show that the detection algorithm operating on projection measurements with Φ designed using background and sensor noise statistics yields significantly better results than the one operating on the downsampled data, and that the empirical pFDR values in our method decay with K. The improvement in performance using projection measurements comes from the distance-preservation property of the projection operator A.
While a Gaussian sensing matrix A preserves distances between any pair of vectors from a finite collection with high probability [51,52], downsampling loses some of the fine differences between similar-looking spectra in the dictionary. Furthermore, when Φ is chosen at random, the resulting whitened transformation matrix is not necessarily distance-preserving. This may worsen the performance, as illustrated in Figure 1b.
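The block-sum downsampling used as the baseline above can be sketched as follows (function name ours; in this sketch any trailing samples beyond K·r are simply dropped):

```python
import numpy as np

def downsample(g, K):
    """Downsampled approximation of g: the jth entry is the sum of the
    r = floor(N/K) consecutive entries g_{(j-1)r+1}, ..., g_{jr}.
    Fine spectral differences within a block are averaged away, which is
    why very similar dictionary spectra become hard to distinguish."""
    N = len(g)
    r = N // K
    return np.array([g[j * r:(j + 1) * r].sum() for j in range(K)])
```

Unlike a Gaussian projection, this operator annihilates any within-block variation, so two spectra differing only at fine scales can map to nearly identical downsampled vectors.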

Anomaly detection
In this section, we evaluate the performance of our anomaly detection method on (a) a simulated dataset, providing a comparison of the results obtained using the proposed projection measurements with those obtained using downsampled measurements, and (b) a real AVIRIS (Airborne Visible InfraRed Imaging Spectrometer) dataset.

Experiments on simulated data
We simulate a spectral image f* composed of 8100 spectra, each of which is either drawn from a dictionary D = {f(1), …, f(5)} consisting of five labeled spectra from the HyMap data that correspond to a natural landscape (trees, grass and lakes) or is anomalous. The anomalous spectrum is extracted from unlabeled AVIRIS data, and the minimum distance between the anomalous spectrum f(a) and any of the spectra in D is d_min = min_{f∈D} ‖f − f(a)‖ = 0.5308. The simulated data has 625 locations that contain the anomalous spectrum. Our goal is to find the spatial locations that contain the anomalous AVIRIS spectrum given noisy measurements of the form z_i = α_i Φ(f*_i + b_i) + w_i, where Φ is designed according to (24), w_i ∼ N(0, σ²I), and f*_i ∈ D under H_0i. As discussed in Section "Anomalous signal detection", f*_i is anomalous under H_1i, and our goal is to control the FDR below a user-specified false discovery level δ. We simulate α_i = √K α*_i, where α*_i ∼ U[2, 3]. In this experiment we assume the availability of background training data to estimate the background statistics and the sensor noise variance σ². Given the knowledge of the background statistics, we perform the whitening transformation discussed in Section "Whitening compressive observations" and evaluate the detection performance on the preprocessed observations given by (2).
For a fixed τ = 0.1 and ε = 0.1, we evaluate the performance of the detector as the number of measurements K increases, under the AK and AU cases respectively, by comparing the pseudo-ROC (receiver operating characteristic) curves obtained by plotting the empirical FDR against 1 − FNR, where FNR is the false nondiscovery rate. Note that 1 − FNR is the expected ratio of the number of null hypotheses that are correctly rejected to the number of declared null hypotheses. The empirical FDR and FNR are computed according to where p_t is the p-value threshold such that the BH procedure rejects all null hypotheses for which p_i ≤ p_t, and the ground truth label L^GT_i = 0 if the ith spectrum is not anomalous, and 1 otherwise. In this experiment, we consider three values of K approximately given by K ∈ {N/6, N/3, N/2}, where N = 106, and evaluate the performance of our detector for each K. Furthermore, in our experiments with simulated data, we declare a spectrum to be anomalous if d_i ≥ η, where η is a user-specified threshold and d_i is defined in (16). We use the p-value upper bound in (20) in our experiments with real data, where the ground truth is unknown.
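The empirical FDR and FNR at a given BH threshold p_t can be computed as follows (function name ours; the displayed formulas did not survive extraction, so this follows the verbal definitions above, including the V/R = 0 convention):

```python
import numpy as np

def empirical_fdr_fnr(p_values, p_t, labels_gt):
    """Empirical FDR and FNR for threshold p_t.

    Reject (declare anomalous) where p_i <= p_t; labels_gt[i] = 1 if the
    ith spectrum is truly anomalous, 0 otherwise.
    FDR: fraction of rejections that are not truly anomalous.
    FNR: fraction of non-rejections that are truly anomalous.
    Both default to 0 when the corresponding denominator is 0."""
    p = np.asarray(p_values, dtype=float)
    gt = np.asarray(labels_gt).astype(bool)
    rejected = p <= p_t
    R = rejected.sum()
    fdr = (rejected & ~gt).sum() / R if R > 0 else 0.0
    accepted = ~rejected
    W = accepted.sum()
    fnr = (accepted & gt).sum() / W if W > 0 else 0.0
    return fdr, fnr
```

Sweeping p_t and plotting the empirical FDR against 1 − FNR traces out the pseudo-ROC curves compared in Figure 2.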
We compare the performance of our method to a generalized likelihood ratio test (GLRT)-based procedure operating on downsampled data, where we collect measurements of the form in (23) and where f̃ refers to the downsampled version of f ∈ D. In this experiment we assume that each spectrum in D is equally likely under H_0i for i = 1, …, M. The GLRT-based approach declares the ith spectrum to be anomalous if where η is a user-specified threshold [26]. While our anomaly detection method is designed to control the FDR below a user-specified threshold, the GLRT-based method is designed to increase the probability of detection while keeping the probability of false alarm as low as possible. To facilitate a fair evaluation of these methods, we compare the pseudo-ROC curves (FDR versus 1 − FNR) and the actual ROC curves (probability of false alarm p_f versus probability of detection p_d) corresponding to these methods, obtained by averaging the empirical FDR, FNR, p_d and p_f over 1,000 different noise and sensing matrix realizations for different values of K.
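The GLRT-style baseline reduces to thresholding the best residual fit over the dictionary. The paper's exact statistic is in an elided display, so the sketch below (function name and the residual form are our stand-in) only illustrates the structure: with equiprobable dictionary elements, declare anomalous when even the best-matching downsampled dictionary spectrum explains the observation poorly:

```python
import numpy as np

def glrt_anomalous(y, D_down, sigma2_eff, eta):
    """Illustrative GLRT-style anomaly test on downsampled data.

    ASSUMPTION: statistic taken as min_f ||y - f||^2 / sigma2_eff over the
    downsampled dictionary D_down (equally likely under H_0); the paper's
    display may include additional scaling. Declare anomalous when the
    statistic exceeds the user threshold eta."""
    residuals = [np.sum((y - f) ** 2) for f in D_down]
    return min(residuals) / sigma2_eff > eta
```

Sweeping η traces the ROC curve of this baseline, against which the FDR-controlling detector is compared.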
We also compare the performance of the proposed method when Φ is chosen according to (24) and when it is chosen at random, as discussed in the previous section. Figure 2a,e show the pseudo-ROC plots and the conventional ROC plots obtained using the GLRT-based method operating on downsampled data when {α_i} are known. Figure 2b,f show the results obtained by using a random Gaussian Φ instead of the Φ in (24). Figure 2c,g show the pseudo-ROC plots and the conventional ROC plots obtained using our method when {α_i} are known. These plots show that performing anomaly detection from our designed projection measurements yields better results than performing anomaly detection on downsampled measurements or on measurements obtained using a random Gaussian Φ. This is largely due to the fact that carefully chosen projection measurements preserve distances (up to a constant factor) among pairs of vectors in a finite collection, whereas the downsampled measurements fail to preserve distances among vectors that are very similar to each other. Similarly, a random projection matrix Φ is not necessarily distance-preserving after the whitening transformation, which leads to poor performance, as illustrated in Figure 2b,f. Figure 2d,h show the pseudo-ROC plots and the conventional ROC plots obtained using our method when {α_i} are unknown and are estimated from the measurements. Note that the value of ζ decreases as K increases, since the estimation accuracy of {α_i} improves with K. These plots show that the performance improves as we collect more observations and that, as expected, the performance under the AK case is better than under the AU case.

Experiments on real AVIRIS data
To test the performance of our anomaly detector on a real dataset, we consider the unlabeled AVIRIS Jasper Ridge dataset g ∈ R^(614×512×197), which is publicly available from the NASA AVIRIS website, http://aviris.jpl.nasa.gov/html/aviris.freedata.html. We split this data spatially to form equisized training and validation datasets, g^t and g^v respectively, each of which is of size 128 × 128 × 197. Figure 3a,b show images of the AVIRIS training and validation data summed through the spectral coordinates.
The training data consist of a rocky terrain with a small patch of trees. The validation data appear to consist of a similar rocky terrain, but also contain an anomalous lake-like structure. The goal is to evaluate the performance of the detector in detecting the anomalous region in the validation data for different values of K. We cluster the spectral targets in the normalized training data into eight clusters using the K-means clustering algorithm and form a dictionary D comprising the cluster centroids. Given the dictionary and the validation data, we find the ground truth by labeling the ith validation spectrum as anomalous if min_{f∈D} ‖f − g^v_i‖ exceeds a fixed threshold. Since the statistics of the possible background contamination in the data could not be learned in this experiment because of the lack of labeled training data, the dictionary might be background contaminated as well. The parameter τ encapsulates this uncertainty in our knowledge of the dictionary. In this experiment, we set τ = 0.2.
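The dictionary-building step above can be sketched in a few lines. The code below uses random unit-norm stand-ins in place of the real 128 × 128 × 197 AVIRIS training spectra, and a plain K-means implementation (our own, not the paper's code) whose centroids form the dictionary:

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for the normalized training spectra (real data: AVIRIS, length 197)
X = rng.standard_normal((500, 197))
X /= np.linalg.norm(X, axis=1, keepdims=True)

def kmeans(X, k, iters=50, rng=rng):
    """Plain Lloyd-style K-means; the k cluster centroids form the
    dictionary D of 'normal' spectra."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each spectrum to its nearest centroid
        dists = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        # recompute centroids, skipping any cluster that emptied
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

D = kmeans(X, 8)   # eight-centroid dictionary, as in the experiment
```

Ground-truth labels then follow by thresholding each validation spectrum's minimum distance to the centroids in D.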
We generate measurements of the form y_i = √K g^v_i + n_i for i = 1, …, 128 × 128, where n_i ∼ N(0, I). The √K factor reflects that the observed signal strength increases with K. For a fixed FDR control value of 0.01, Figure 3c,d show the results obtained for K ≈ N/5 and K ≈ N/2, respectively. Figure 3e shows how the probability of error decays as a function of the number of measurements K. The results presented here are obtained by averaging over 1,000 different noise and sensing matrix realizations. From these results, we can see that the number of detected anomalies increases with K and the number of misclassifications decreases with K.

Conclusion
This work presents computationally efficient approaches for detecting known targets and anomalies of different strengths from projection measurements without performing a complete reconstruction of the underlying signals, and offers theoretical bounds on the worst-case target detector performance.This article treats each signal as independent of its spatial or temporal neighbors.This assumption is reasonable in many contexts, especially when the spatial or temporal resolution is low relative to the spatial homogeneity of the environment or the pace with which a scene changes.However, emerging technologies in computational optical systems continue to improve the resolution of spectral imagers.In our future work we will build upon the methods that we have discussed here to exploit the spatial or temporal correlations in the data.

Appendix 1: Proof of Theorem 1
Using linear algebra and matrix theory, it is possible to show that the construction of Φ given in (24) satisfies (3).c In particular, we can substitute (24) in (3) to verify this directly.
Observe that C_Φ = (ΦΣ_bΦ^T + σ²I)^(−1/2) can be written in terms of (24) as follows: where the third-to-last equation follows from the definition of B, and (25) follows from the fact that B is symmetric and positive definite. If B is positive definite, then B^(−1) is positive definite as well and can be decomposed as B^(−1) = B^(−1/2)B^(−1/2), where the matrix square root B^(−1/2) is symmetric and positive definite. By substituting (25) and (24) in (3), we obtain the desired result. A sufficient condition for B to be positive definite can be derived as follows.
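The whitening matrix C_Φ = (ΦΣ_bΦ^T + σ²I)^(−1/2) can be computed numerically via a symmetric eigendecomposition. A minimal sketch (function name ours):

```python
import numpy as np

def whitener(Phi, Sigma_b, sigma2):
    """Compute C_Phi = (Phi Sigma_b Phi^T + sigma^2 I)^{-1/2}.

    Applying C_Phi to z = alpha*Phi(f + b) + w whitens the combined
    background-plus-sensor noise covariance to the identity."""
    K = Phi.shape[0]
    G = Phi @ Sigma_b @ Phi.T + sigma2 * np.eye(K)
    # G is symmetric positive definite, so eigh gives G = U diag(w) U^T
    w, U = np.linalg.eigh(G)
    return U @ np.diag(w ** -0.5) @ U.T   # symmetric inverse square root
```

By construction C_Φ G C_Φ = I, which is exactly the condition that the transformed noise be white.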
To ensure positive definiteness of B, we must have, for any nonzero x ∈ R^K, that the quadratic form in (26) is positive. Note that since Σ_b is positive semidefinite, x^T AΣ_bA^T x ≥ 0. However, the right-hand side of (26) is > 0 only if the spectral norm ‖AΣ_bA^T‖ < 1. Since ‖AΣ_bA^T‖ ≤ ‖A‖²λ_max, where λ_max is the largest eigenvalue of Σ_b, it suffices that ‖A‖²λ_max < 1 to ensure ‖AΣ_bA^T‖ < 1, which leads to the result of Theorem 1.

Appendix 2: Proof of Theorem 2
The proof of Theorem 2 adapts the proof techniques from [48] to nonidentical independent hypothesis tests. We begin by expanding the pFDR definition in (8) as follows: Observe that R(Γ) = k implies that there exists some subset S_k of k rejected hypotheses; here Γ̄(j) is the complement of Γ(j), the significance region corresponds to the set S_k, and T = (y_1, …, y_M) is the set of test statistics corresponding to each hypothesis test. Considering all such subsets, we have pFDR By plugging in the definition of V({Γ_i}) from (9), we have for all u ∈ S_k, since the tests are independent of each other given A. The posterior probability P_H for the ith hypothesis test can be expanded using Bayes' rule as where f̂_i = arg max_{f(ℓ)∈D} P(f*_i = f(ℓ) | y_i, α_i, A). To upper bound the numerator of (29), consider the probability of misclassification given by (P_e)_i = P(f̂_i ≠ f*_i), where f*_i = f(j) ∈ D, which can be expanded as follows: The denominator term in (29) can be expanded as follows: Observe that this term is nonnegative, and Substituting (30) and (31) in (29), By substituting (32) in (27) and (28) we have:

Appendix 3: Proof of Theorem 3
The proof is via a random selection technique, similar to random coding arguments common in information theory. Specifically, we will draw a K × N sensing matrix A at random from a particular distribution and then show that, for ε, N, and K satisfying the conditions of the theorem, the probability that the conclusions of the theorem will fail to hold for this randomly chosen A is strictly smaller than unity. This will imply that the conclusions of the theorem must be true for at least one (deterministic) realization of A.
We begin by specifying all the relevant random variables:

• f*_1, …, f*_M are i.i.d. random variables taking values in the dictionary D = {f(1), …, f(m)} with probabilities p(1), …, p(m);
• n_1, …, n_M are the i.i.d. Gaussian noise vectors;
• G is a K × N matrix with i.i.d. N(0, 1) entries.

We assume that {f*_i}, {n_i}, and G are mutually independent, and we will denote by P their joint probability distribution. Finally, we let A = (1/√K)G and consider the observation model where α_1, …, α_M > 0 are the given signal strengths.
We first consider the case when α_1 = ⋯ = α_M = α. Given ε, N, and K, we define the following two error events: and where, for each i ∈ {1, …, M}, f̂_i is defined according to (12). Note that, since we have assumed that the α_i's are equal and all the pairs (f*_i, n_i), i ∈ {1, …, M}, are i.i.d., the error probabilities are identical across i. We will now prove that P(E_1 ∪ E_2) < 1. The union bound gives P(E_1 ∪ E_2) ≤ P(E_1) + P(E_2). First, we bound P(E_1). To do that, we use the following concentration result for Gaussian random matrices [58]: for any t ≥ 0, Letting t = ε(√K + √N) and using the fact that t² ≥ (K + N)ε², we get Next, we bound P(E_2). To that end, we use the following result, which is a straightforward extension of ([5], Theorem 1) to nonequiprobable dictionary elements: Lemma 1 (Compressive classification error). Consider the problem of classifying a signal of interest f* ∈ D = {f(1), …, f(m)} to one of m known target classes by making observations of the form y = αAf* + n, where n ∼ N(0, σ²I), given the knowledge of the dictionary D, prior probabilities p(j) for j ∈ {1, …, m}, sensing matrix A, and the noise variance σ². If the entries of A are drawn i.i.d. from N(0, 1/K) independently of f* and n, and the estimate f̂ is obtained according to (12), then where the probability is taken with respect to the distributions underlying f*, A, and n.
Because of (13a), the right-hand side of (35) is less than 1 − ε − p_max, which is strictly positive by hypothesis. Thus, from this fact and from (34), it follows that there exists at least one deterministic choice of the K × N sensing matrix A*, such that: where, for a given choice of A, (P_e)_max(A) denotes the maximum probability of error defined in Theorem 2. Next, from (38a) and (13b) it follows that A* satisfies the conditions of Theorem 1. Finally, we use (11) to bound the worst-case pFDR achievable with A*. First of all, we note that the function U(x) = x/(1 − p_max − x) is twice differentiable and convex on the interval [0, 1 − p_max). Therefore, for any x ∈ [0, 1 − p_max) and any h > 0 small enough so that x + h ∈ [0, 1 − p_max), we have

Appendix 4: Proof of Theorem 4
We first prove this theorem assuming that {α_i} are known, and later extend it to the case where {α̂_i} are estimated from the observations. Let f_i = arg min_{f∈D} ‖f*_i − f‖. The p-value expression in (17) can be expanded as follows: Note that ‖α_i A(f*_i − f_i) + n‖² is a noncentral χ² random variable with K degrees of freedom and noncentrality parameter ν_i = α_i²‖A(f*_i − f_i)‖². Thus (40) can be written in terms of a noncentral χ² CDF F(d_i²; K, ν_i) with parameter d_i². The upper and lower bounds on ν_i can be obtained using the properties of the projection matrix A. Applying (18), we see that these bounds hold with high probability. The stated bound then follows, since ‖f*_i − f‖ ≤ τ for some f ∈ D under H_0i. When {α_i} are estimated from the observations such that {α̂_i} satisfy (19), we can write the p-value expression in (41) as follows: where (42) is due to the distance-preservation property of A given in (18). Observe that (α_i/α̂_i)²‖f*_i − f_i‖² can be upper bounded as shown below: where the third-to-last equation is due to the triangle inequality, the second-to-last comes from the assumption that ‖f*_i‖ = 1, and the last inequality is due to (19). By applying this result to (42) and exploiting the fact that ‖f*_i − f‖ ≤ τ under H_0i for some f ∈ D, we have the desired bound.

Endnotes
a Note that τ cannot exceed √2 because we assume that all targets of interest, including those in D and the actual target f*, are unit-norm.
b The anomaly detection problem discussed here is more accurately described as target detection in the classical detection theory vocabulary. However, in recent works [24,25], the authors assume that the nominal distribution is obtained from training data, and a test sample is declared to be anomalous if it falls outside of the nominal distribution learned from the training data. Our work is in a similar spirit: we learn our dictionary from training data and label any test spectrum that does not correspond to our dictionary as anomalous.
c The authors would like to thank Prof. Roummel Marcia for fruitful discussions related to this point.

Figure 1
Figure 1 Compressive target detection results under the AK ({α_i} known) and AU ({α_i} unknown) cases, respectively, as a function of K. (a) Comparison of the worst-case empirical pFDR curves with the theoretical bounds when the SNR is high. (b) Comparison of the results obtained by the proposed method using projection measurements with Φ designed according to (24), with Φ chosen at random, and using downsampled measurements (DM) when the SNR is low.

Figure 2
Figure 2 Comparison of the performances of the proposed anomaly detector using a random Φ, the proposed anomaly detector using the designed Φ in (24), and the GLRT-based method operating on downsampled data for different values of K, when α*_i ∼ U[2, 3] and α_i = α*_i √K.

Figure 3
Figure 3 Anomaly detection results corresponding to real AVIRIS data for a fixed FDR control of 0.01.

so that ((1 − p_min)/p_min)(P_e)_max(A*) ≤ 1 (39). Then from (13a) we have x + h ≤ 1 − ε − p_max < 1 − p_max, and from (13c) we have x + h ≥ 0. Hence, using (39) and simplifying, we obtain the desired bound on pFDR_max(A*). This proves the theorem for the case α_1 = ⋯ = α_M = α. To handle the case when the α_i's are distinct, we simply let i* = arg min_{i∈{1,…,M}} α_i and replace the definition of the error event E_2 with E_2 = {f̂_{i*} ≠ f*_{i*}}. Then the same argument goes through, except that instead of (34) we use the bound P(f̂_i ≠ f*_i | A) ≤ P(f̂_{i*} ≠ f*_{i*} | A) = P(E_2 | A) for all i ≠ i*, which follows from the following argument. First, we can replace the observation model with the equivalent model ỹ_i = Af*_i + ñ_i, i ∈ {1, …, M}, where ñ_i = (1/α_i)n_i ∼ N(0, (1/α_i²)I). Second, from the fact that α_i ≥ α_{i*} ≡ α_min for any i ≠ i*, it follows that ñ_{i*} is equal in distribution to ñ_i + n′_i, where n′_i ∼ N(0, (1/α_min² − 1/α_i²)I) is independent of ñ_i. This implies that the i*th observation is the noisiest, and the corresponding MAP estimate f̂_{i*} has the largest probability of error.