 Research
 Open Access
Partially sparse imaging of stationary indoor scenes
EURASIP Journal on Advances in Signal Processing volume 2014, Article number: 100 (2014)
Abstract
In this paper, we exploit the notion of partial sparsity for scene reconstruction associated with through-the-wall radar imaging of stationary targets under reduced data volume. Partial sparsity implies that the scene being imaged consists of a sparse part and a dense part, with the support of the latter assumed to be known. For the problem at hand, sparsity is represented by a few stationary indoor targets, whereas the high scene density is defined by exterior and interior walls. Prior knowledge of wall positions and extent may be available either through building blueprints or from prior surveillance operations. The contributions of the exterior and interior walls are removed from the data through the use of projection matrices, which are determined from wall- and corner-specific dictionaries. The projected data, with enhanced sparsity, is then processed using l_{1} norm reconstruction techniques. Numerical electromagnetic data is used to demonstrate the effectiveness of the proposed approach for imaging stationary indoor scenes using a reduced set of measurements.
1 Introduction
The ultimate objective of achieving actionable intelligence in an efficient and reliable manner is faced with a host of challenges underlying the urban sensing and through-the-wall radar imaging (TWRI) applications [1–9]. First and foremost is the increasing demand on radar systems to deliver high-resolution images in both range and cross-range, which requires the use of wideband signals and large array apertures, respectively. Second, the backscatter from the first wall, which is the exterior wall of the building being imaged, is much stronger than the returns from the interior of the building. This is because the signal undergoes attenuation in the wall materials; the more walls the signal penetrates to reach the indoor targets, the weaker the returns. Therefore, the clutter caused by the wall backscatter can significantly contaminate the radar data and hinder the main intent of providing enhanced system capabilities for imaging of building interiors and detection and localization of stationary indoor targets.
Recently, compressive sensing (CS) has been used for efficient data acquisition in radar systems in general [10–14] and in urban radar systems in particular [15–19]. For urban radar systems, removal of clutter and stationary targets via change detection or exploitation of sparsity in the Doppler domain readily enables the application of CS techniques for moving target detection inside buildings [20–23]. However, these means are not available for detection and localization of stationary targets of interest, thereby requiring significant mitigation of wall reflections.
Several approaches have been proposed for dealing with front wall returns under full data volume without the need for reference or background data [8, 24–26]. In [8], the wall parameters, such as thickness and dielectric constant, are estimated from first wave arrivals and then used to model and subtract the front wall contributions from the received data. The approach in [24] for wall clutter mitigation is based on spatial filtering. It utilizes the strong similarity between wall electromagnetic (EM) responses, as viewed by different antennas along the entire physical or synthesized array aperture. A spatial filter is applied to remove the dc component corresponding to the constanttype radar return associated with the front wall. The subspace decomposition method, presented in [25, 26], utilizes not only the approximately identical wall scattering characteristics across the array elements but also the higher strength of the wall reflections compared to that of target reflections. When singular value decomposition (SVD) is applied to the measured data matrix, the wall subspace can be captured by the singular vectors associated with the dominant singular values. As a result, the wall contributions can be removed by projecting the data measurement vector at each antenna on the wall orthogonal subspace. It is noted that as the roundtrip signal traveling times from the antennas to each interior wall, which is parallel to the front wall, are constant across the array aperture, both spatial filtering and the subspace decomposition methods will also mitigate returns from interior parallel walls as long as they are not shadowed by other contents of the building [27].
Both spatial filtering and subspace projection approaches have been shown to be equally effective for synthetic aperture radar (SAR) imaging under reduced data volume, provided that the same reduced set of frequencies or time samples is used at each available antenna position [28, 29]. Requiring the same frequency observations or time samples across all antennas is very restrictive and may not always be feasible. For example, some individual frequencies or frequency subbands may be unavailable due to competing wireless services or intentional interferences. Further, antenna positions may signify radar units operating independently, each with a separate frequency band to avoid cross-interference. In this paper, we propose an alternate scheme for imaging of stationary indoor scenes which overcomes this limitation of wall clutter mitigation techniques under reduced data volume by exploiting prior knowledge of the room layout. This information may be available either through building blueprints or from prior surveillance operations specifically dedicated to determining the building layout. We consider the scene being imaged to be partially sparse. That is, the scene consists of two parts, one of which is sparse and contains the stationary indoor targets of interest while the other, corresponding to the exterior and interior walls, is dense with known support [30, 31]. We focus on stepped-frequency SAR operation and assume that few frequency observations are available, which could be the same or different from one antenna position to another, constituting the set of reduced spatial measurements. We employ projection matrices that are determined from wall- and corner-specific scattering responses to remove the exterior and interior wall contributions from the measurements. In so doing, we enable the application of conventional sparse reconstruction schemes to obtain an image of the sparse part of the scene containing the stationary indoor targets.
We demonstrate the effectiveness of the partial sparsity-based approach for reconstruction of stationary through-the-wall scenes using numerical electromagnetic data of a single-story building for both cases of having the same reduced set of frequencies at each of the available antenna locations and also when different frequency measurements are employed at different antenna locations.
It is noted that, as an alternative to the proposed approach, the wall and corner responses can be modeled and then subtracted from the received data. However, this approach has two issues. First, it is very sensitive to phase errors. Second, it may not always work since some corners and parts of interior walls may be shadowed by objects in the interior of the building. Additionally, it is important to draw a distinction between the proposed approach and the subspace projection approach for wall removal [25, 26]. Although both approaches involve data projections, they exploit fundamentally different characteristics of the data measurements to achieve the desired objective. The subspace projection approach does not require knowledge of wall positions and extent. However, it assumes nominal geometry of the walls. In this respect, it relies on a specific radar configuration, whose defining characteristics are the constant distance of each antenna from the walls and normal incidence illuminations. This ensures approximately constant wall contributions in the data received at all antennas. The partial sparsity approach, on the other hand, relies on accurate knowledge of the building geometry to create wall- and corner-specific dictionaries, which are subsequently used for mitigating the contributions of exterior and interior walls. It does not, however, demand the invariance of wall returns across the partial or entire radar aperture. The impact of these fundamental differences on the performance of the two approaches is highlighted in Section 4.
The remainder of this paper is organized as follows. Section 2 presents the signal model under the assumption of known support of the exterior and interior walls. The wall contribution removal technique and scene reconstruction are discussed in Section 3. Section 4 evaluates the performance of the proposed partially sparse through-the-wall scene reconstruction approach using numerical EM data of a single-story building. Performance comparison with the subspace projection approach is also provided. Conclusions are drawn in Section 5.
2 Signal model
In this section, we develop the forward scattering model for through-the-wall radar imaging. The model is based on physical optics, and any associated nonlinearities are ignored [32].
Consider a monostatic SAR with N antenna positions located along the x-axis parallel to a homogeneous front wall. The transmit waveform is assumed to be a stepped-frequency signal of M frequencies, which are equispaced over the desired bandwidth ω_{M − 1} − ω_{0}, with the m th frequency given by ω_{m} = ω_{0} + mΔω, m = 0, 1,…, M − 1,
where ω_{0} is the lowest frequency in the desired frequency band and Δω is the frequency step size. The scene behind the front wall is assumed to be composed of P point targets, L − 1 interior walls, which are parallel to the front wall and to the radar scan direction, and K corners corresponding to the junctions of two walls perpendicular to each other. It is noted that, due to the specular nature of the wall reflections, a SAR system located parallel to the front wall will only be able to receive backscattered signals from interior walls, which are parallel to the front wall. The contributions of walls perpendicular to the front wall will be captured primarily through the backscattered signals from the corners [33, 34].
The component of the received signal corresponding to the m th frequency at the n th antenna position, with phase center at x_{ tn } = (x_{ tn }, 0), due to the P point targets is given by [35, 36]
where σ_{p} is the complex amplitude corresponding to the p th target return and τ_{p,n} is the two-way traveling time between the n th antenna and the p th target. The reflections from the L walls measured at the n th antenna location corresponding to the m th frequency can be expressed as [24]
where σ_{w,l} is the complex amplitude associated with the l th wall and τ_{w,l} is the two-way traveling time of the signal from the n th antenna to the l th wall. Note that since the scan direction is parallel to the walls, the delay τ_{w,l} does not depend on the variable n and is a function only of the down-range distance between the l th wall and the antenna baseline. Finally, the reflections from the K corners measured at the n th antenna location corresponding to the m th frequency can be expressed as [33, 37]
where σ̄_{k} is the complex amplitude, L̄_{k} is the length, θ̄_{k} is the orientation angle of the k th corner, τ_{k,n} is the two-way propagation delay between the n th antenna and the k th corner, θ_{k,n} is the aspect angle associated with the k th corner and the n th antenna, and Γ_{k,n} is an indicator function which assumes a unit value only when the n th antenna illuminates the concave side of the k th corner. We note that each of the complex amplitudes σ_{p}, σ_{w,l}, and σ̄_{k} in (2) to (4) contains contributions from free-space path loss, attenuation due to propagation through the wall(s), and the reflectivity of the corresponding scatterer. The n th received signal corresponding to the m th frequency is, thus, given by
Assume that the scene being imaged is divided into a finite number of gridpoints, say Q, in cross-range and down-range. Let z_{n} represent the received signal vector corresponding to the M frequencies and the n th antenna location, and s be the concatenated Q × 1 scene reflectivity vector corresponding to the spatial sampling grid. Under the assumption that the building layout is known a priori, s can be expressed as s = [s_{1}^{T} s_{2}^{T}]^{T}, where s_{1} ∈ ℂ^{Q_{1}} is the dense part whose support is known and s_{2} ∈ ℂ^{Q_{2}}, Q_{2} = Q − Q_{1}, is the sparse part. Note that s_{1} corresponds to the walls that are parallel to the antenna baseline. Further, since the wall junctions lie along the parallel walls, the corner locations would correspond to the support of a subset of s_{1}, say s̄_{1} ∈ ℂ^{R}, R < Q_{1}. Then, using (2) to (5), we obtain the matrix-vector form
where A_{ n }, B_{ n }, and C_{ n } are the dictionary matrices corresponding to the wall, corner reflector, and point target, respectively. The matrix C_{ n } is of dimension M × Q_{2} with its (m, q_{2})th element given by
where τ_{q_{2},n} is the two-way traveling time between the n th antenna and the q_{2}th gridpoint of the sparse part. The wall dictionary A_{n} is an M × Q_{1} matrix, whose (m, q_{1})th element takes the form [38]
In (8), y_{q_{1}} represents the down-range coordinate of the q_{1}th gridpoint in the dense part, and ℑ_{q_{1},n} is an indicator function, which assumes a unit value only when the q_{1}th gridpoint lies in front of the n th antenna, as illustrated in Figure 1. That is, if x_{q_{1}} represents the cross-range coordinate of the q_{1}th dense gridpoint and δx represents the cross-range sampling step, then ℑ_{q_{1},n} = 1 provided that x_{q_{1}} − δx/2 ≤ x_{tn} ≤ x_{q_{1}} + δx/2. The corner dictionary B_{n} is an M × R matrix whose (m, r)th element is given by
Equation 6 considers the contribution of only one antenna location. Stacking the measurement vectors corresponding to all N antennas to form a tall vector,
we obtain the linear system of equations
where
The vector z contains the full dataset corresponding to the N antenna locations and the M frequencies. For the case of reduced data volume, consider ξ, which is a J (≪ MN)-dimensional vector consisting of elements randomly chosen from z as follows:
In (13), Φ is a J × MN measurement matrix of the form
where ‘kron’ denotes the Kronecker product, I_{J_{1}} is a J_{1} × J_{1} identity matrix, ψ is a J_{2} × N measurement matrix constructed by randomly selecting J_{2} rows of an N × N identity matrix, and φ_{n}, n = 0, 1,…, N − 1, is a J_{1} × M measurement matrix constructed by randomly selecting J_{1} rows of an M × M identity matrix. We note that ψ determines the reduced antenna locations, whereas φ_{n} determines the reduced set of frequencies corresponding to the n th antenna location.
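As an illustration, the row-selection structure of Φ can be sketched in a few lines of NumPy. This is our own toy rendering (small illustrative values of M, N, J_{1}, and J_{2}, and variable names of our choosing, not from the paper); it builds Φ so that each retained antenna gets its own random frequency subset:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 16, 8        # frequencies, antenna positions (small illustrative values)
J1, J2 = 10, 4      # retained frequencies per antenna, retained antennas

# psi: random selection of J2 antenna positions out of N
ant_idx = np.sort(rng.choice(N, J2, replace=False))

# Phi selects, for each retained antenna n, an independent set of J1 frequencies
Phi = np.zeros((J1 * J2, M * N))
for i, n in enumerate(ant_idx):
    freq_idx = np.sort(rng.choice(M, J1, replace=False))  # rows of phi_n
    for j, m in enumerate(freq_idx):
        Phi[i * J1 + j, n * M + m] = 1.0

z = rng.standard_normal(M * N)   # stand-in for the stacked measurement vector
xi = Phi @ z                     # reduced measurement vector with J = J1*J2 entries
```

Each row of Φ contains a single unit entry, so applying it simply extracts the chosen frequency samples at the chosen antenna locations from the stacked vector z.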
3 Wall contribution removal and scene reconstruction
Given the reduced measurement vector ξ and knowledge of the support of the walls and corners, the goal is to reconstruct the sparse part of the image where the targets of interest are located. Towards this goal, we first need to remove the contributions of the dense part of the scene from ξ. Let P_{A} be the matrix of the orthogonal projection from ℂ^{J} onto the orthogonal complement of the range space of the matrix Φ A. If Φ A is a full rank matrix, then P_{A} can be expressed as P_{A} = I_{J} − (Φ A)(Φ A)^{†}, where I_{J} is a J × J identity matrix and (Φ A)^{†} denotes the pseudoinverse of (Φ A). On the other hand, if Φ A has a reduced rank, then we have to resort to the SVD of Φ A to obtain the matrix P_{A} as P_{A} = U_{A}U_{A}^{H},
where U_{ A } is the matrix consisting of the left singular vectors corresponding to the zero singular values and the superscript ‘H’ denotes the Hermitian operation. Applying the projection matrix P_{ A } to the observation vector ξ, we obtain
Next, consider the projection matrix P_{B} given by P_{B} = U_{B}U_{B}^{H},
where U_{ B } is the matrix consisting of the left singular vectors corresponding to the zero singular values of the matrix P_{ A }Φ B. Application of P_{ B } to the measurement vector ξ_{ A } leads to
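To make the two-stage projection concrete, the following sketch uses random stand-ins for the products Φ A, Φ B, and Φ C (toy dimensions and synthetic reflectivities of our own choosing, not the actual radar dictionaries) and verifies that the sequential projectors annihilate the wall and corner contributions:

```python
import numpy as np

rng = np.random.default_rng(1)
J, Q1, R, Q2 = 60, 8, 4, 20   # toy dimensions

# random complex stand-ins for Phi*A (walls), Phi*B (corners), Phi*C (targets)
PhiA = rng.standard_normal((J, Q1)) + 1j * rng.standard_normal((J, Q1))
PhiB = rng.standard_normal((J, R)) + 1j * rng.standard_normal((J, R))
PhiC = rng.standard_normal((J, Q2)) + 1j * rng.standard_normal((J, Q2))

# P_A: projector onto the orthogonal complement of range(Phi*A), full-rank form
P_A = np.eye(J) - PhiA @ np.linalg.pinv(PhiA)

# P_B: built from the left singular vectors of P_A Phi B associated with
# zero singular values (a basis of the null space of (P_A Phi B)^H)
U, sv, _ = np.linalg.svd(P_A @ PhiB, full_matrices=True)
rank = int(np.sum(sv > 1e-10))
U_B = U[:, rank:]
P_B = U_B @ U_B.conj().T

# synthetic measurement: dense walls (s1), corners (sbar1), sparse targets (s2)
s1 = rng.standard_normal(Q1)
sbar1 = rng.standard_normal(R)
s2 = np.zeros(Q2); s2[[3, 11]] = [1.0, 0.7]
xi = PhiA @ s1 + PhiB @ sbar1 + PhiC @ s2

xi_BA = P_B @ P_A @ xi
# wall and corner terms are annihilated; only the (projected) target term survives
err = np.linalg.norm(xi_BA - P_B @ P_A @ (PhiC @ s2))
```

Because P_{A} annihilates range(Φ A) and P_{B} annihilates range(P_{A}Φ B), the residual `err` is zero up to numerical precision, leaving a measurement that depends only on the sparse part.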
Thus, after the sequential application of the two projection matrices, the measurement vector ξ_{BA} contains contributions from only the sparse image part, s_{2}, which can then be recovered by solving the problem of minimizing ‖s_{2}‖_{1} subject to ξ_{BA} = P_{B}P_{A}Φ C s_{2}.
The problem in (20) belongs to the classical setting of CS and, thus, can be solved using convex relaxation, greedy pursuit, or combinatorial algorithms [39–43]. In this work, we choose orthogonal matching pursuit (OMP), which is an iterative greedy algorithm [44].
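As a sketch of the recovery step, a bare-bones OMP applied to a generic dictionary looks as follows. This is our own minimal implementation for illustration (not the exact routine of [44]); the dictionary, sparsity, and sizes are toy values:

```python
import numpy as np

def omp(D, y, n_iter):
    """Minimal orthogonal matching pursuit: at each iteration, pick the atom of D
    most correlated with the residual, then re-fit all selected atoms by least
    squares and update the residual."""
    x = np.zeros(D.shape[1], dtype=complex)
    residual = y.astype(complex)
    support = []
    coef = np.zeros(0, dtype=complex)
    for _ in range(n_iter):
        support.append(int(np.argmax(np.abs(D.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# toy demo: 2-sparse vector, random dictionary with unit-norm columns
rng = np.random.default_rng(2)
D = rng.standard_normal((40, 100))
D /= np.linalg.norm(D, axis=0)
x0 = np.zeros(100); x0[[7, 42]] = [1.0, -0.6]
x_hat = omp(D, D @ x0, n_iter=2)
```

The number of iterations plays the role of the assumed scene sparsity, which is why its choice matters in practice (see Section 4.5).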
It is noted that the dimensionality of the orthogonal complements of the range spaces of the matrices Φ A and Φ B is at least J − Q_{1} and J − R, respectively. Further, if access to full data volume is available, the proposed wall removal procedure can also be applied to the full data vector z with appropriate projection matrices determined from the dictionary matrices A and B instead of Φ A and Φ B, respectively. The wall-free data can then be processed using conventional image formation techniques [45].
4 Simulation results
In this section, we present scene reconstruction results for the partial sparsity technique using numerical EM data and provide performance comparison with the subspace projection approach for both full and reduced data volumes. For the reduced data volume, we consider both cases of having different frequency measurements at different available antenna locations and also when the same reduced set of frequencies is employed at each of the available antenna locations. Note that for the subspace projection-based wall mitigation CS approach proposed in [28], the former poses a more challenging problem than the latter, as it is not amenable to wall removal using direct implementation of the subspace projection technique. Instead, the range profile at each employed antenna location first needs to be reconstructed through l_{1} norm minimization using the reduced frequency set [28]. Then, the Fourier transform of each reconstructed range profile is taken to recover the full frequency data measurements at each antenna location. Direct application of the subspace projection technique can then proceed, followed by the scene reconstruction.
4.1 Electromagnetic modeling
The simulation is based on Xpatch®, developed by SAIC/DEMACO (Champaign, IL, USA), which is a computational EM code implementing an approximate ray-tracing/physical-optics approach. We created the computer model of a single-story building, with overall dimensions of 7 m × 10 m × 2.2 m, containing four humans (labeled 1 through 4) and several furniture items, as shown in Figure 2. The origin of the coordinate system was chosen to be in the center of the building, with the x-axis and the y-axis oriented as shown in Figure 2b. The exterior walls were made of 0.2-m-thick bricks and had glass windows and a wooden door. The interior walls were made of 5-cm-thick Sheetrock and had a wooden door. The ceiling/roof was flat, made of a 7.5-cm-thick concrete slab. The entire building was placed on top of a dielectric ground plane. The furniture items, namely, a bed, a couch, a bookshelf, a dresser, and a table with four chairs, were made of wood, while the mattress and cushions were made of a generic foam/fabric material. Humans 1 through 4 were positioned at various locations in the interior of the building with 45°, 0°, −20°, and 10° azimuthal orientation angles, respectively. Note that an orientation angle of 0° corresponds to the human facing along the positive x direction and the positive angles correspond to a counterclockwise rotation in the horizontal plane. Human 3, positioned inside the interior room, was carrying an AK-47 rifle. The human model was made of a uniform dielectric material with properties close to those of the skin and is described in [46]. The human body radar cross section (RCS) depends on the aspect angle but is generally bounded between −10 and 0 dBsm. Interestingly, the average human body RCS is fairly constant over the frequency range considered in this paper. More detailed results on the human body radar signature can be found in [46]. The AK-47 model is made of metal and wood and is described in [47, 48].
The dielectric properties of the various materials employed are listed in Table 1.
A 6-m-long synthetic aperture line array, with an inter-element spacing of 2.54 cm and located parallel to the front of the building at a standoff distance of 4 m, was used for data collection. Monostatic operation was assumed. The antenna had a 3-dB beamwidth of 60° in both elevation and azimuth and was positioned 0.5 m above the ground plane. The antenna boresight was aimed perpendicular to the exterior wall. A stepped-frequency signal covering the 0.7- to 2-GHz frequency band with a step size of 8.79 MHz was employed. Thus, at each of the 239 scan positions, the radar collected 148 frequency measurements over the 1.3-GHz bandwidth.
4.2 Image reconstruction under full data volume
The region to be imaged was chosen to be 9 m × 12 m centered at the origin and divided into 121 × 161 pixels in cross-range and down-range, respectively. Figure 3a shows the image obtained with backprojection using the full raw dataset. In this figure and all subsequent figures in this paper, we plot the image intensity with the maximum intensity value in each image normalized to 0 dB. A Hanning window was applied to the data along the frequency dimension in order to reduce the range sidelobes in the image. The humans in the image are indicated by red circles. We can clearly see the front wall, some of the corners, and humans 1 and 2. Human 3 in the interior room is barely visible due to the additional EM loss as the transmitted signal has to penetrate through both the exterior and interior walls. Likewise, it is a challenge to detect human 4, who is the farthest away from the front wall. Figure 3b shows the backprojected image after masking out the dense regions with known support. Although all the targets are visible in the masked image, the image is cluttered due to the presence of the wall and corner sidelobes.
Next, we reconstructed the scene using the subspace decomposition-based wall mitigation approach. The first two dominant singular vectors of the frequency vs. antenna raw data matrix were used to reconstruct the wall subspace. The wall subspace dimension was selected using the method reported in [49]. Finally, backprojection was performed on the wall clutter mitigated data, and the corresponding image is shown in Figure 4a. We observe that although the stationary targets are more visible and the front and interior wall reflections are successfully removed, the corners indicating the presence of doors and windows are still present. So is most of the back wall due to shadowing effects. The approach also removed the reflections from the edge of the couch, and only the couch corners survive. More importantly, the presence of discontinuities in the front wall (windows and door) causes the subspace decomposition-based approach to introduce artifacts in the image, indicated by the red rectangles. Such artifacts in the interior of the building are more visible in Figure 4b, which shows the image after masking out the dense regions with known support. These artifacts are attributed to the fact that the subspace projection scheme assumes the wall response to be the same from one antenna to the other, which is violated by the presence of windows and doors.

Finally, Figure 5 presents the backprojected image obtained using the proposed approach. The dense part of the scene, corresponding to the building layout (exterior and interior walls parallel to the array and corners), consisted of 7,196 pixels, while the sparse part of the scene consisted of the remaining 12,285 pixels. Compared to Figures 3b and 4b, the image in Figure 5 is the least cluttered since the wall sidelobes, in particular near the back wall, are absent. All of the humans and the furniture items are clearly visible in the image.
We, therefore, conclude that the proposed approach provides superior performance compared to the subspace decompositionbased wall mitigation approach under the full data volume.
In addition to the visual inspections, we also assess the performance of the various methods in terms of the target-to-clutter ratio (TCR), which is defined as the ratio between the average pixel power I_{t} in the target region and the average pixel power I_{c} in the clutter region of the reconstructed image ŝ_{2}, that is, TCR = I_{t}/I_{c} with I_{t} = (1/N_{t}) Σ_{q∈R_{t}} |ŝ_{2}(q)|² and I_{c} = (1/N_{c}) Σ_{q∈R_{c}} |ŝ_{2}(q)|²,
where R_{t} is the target region, R_{c} is the clutter area, N_{t} is the number of pixels in the target area, and N_{c} is the number of pixels in the clutter region. The target area is composed of regions containing the four humans, and the remaining pixels of ŝ_{2} constitute the clutter region. Note that we consider furniture reflections as unwanted returns, and accordingly, they are treated as clutter. Table 2 shows the TCR values for the reconstructed images of Figures 3b, 4b, and 5. As expected, the TCR is improved using the proposed scheme over the subspace projection-based wall clutter mitigation scheme.
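The TCR computation reduces to a few lines; the sketch below is our own helper on a synthetic image (the helper name, mask representation, and pixel values are illustrative assumptions, not from the paper):

```python
import numpy as np

def tcr_db(image, target_mask):
    """Target-to-clutter ratio in dB: mean pixel power over the target region
    divided by mean pixel power over the remaining (clutter) pixels."""
    power = np.abs(image) ** 2
    I_t = power[target_mask].mean()
    I_c = power[~target_mask].mean()
    return 10.0 * np.log10(I_t / I_c)

# synthetic 4x4 image: one bright target pixel, one weak clutter pixel
img = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = True
img[1, 1] = 10.0   # target amplitude -> I_t = 100
img[2, 3] = 1.0    # clutter amplitude -> I_c = 1/15 over the 15 clutter pixels
tcr = tcr_db(img, mask)
```

For this synthetic image, the ratio is 100/(1/15) = 1500, or about 31.8 dB.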
4.3 Image reconstruction under different sets of reduced frequencies at each available antenna location
Conventional image formation techniques, such as backprojection, compromise the image quality when a reduced number of measurements is considered, thereby impeding the detection of targets behind the wall in the image domain. This is illustrated in Figure 6, wherein we used 118 randomly selected frequencies (79.7% of 148) and 79 randomly chosen antenna locations (30% of 239) for backprojection, which collectively represent 26% of the total data volume. The corresponding space-frequency sampling pattern is shown in Figure 7, where the vertical axis represents the antenna location and the horizontal axis represents the frequency. The filled boxes represent the data samples constituting the reduced set of measurements.

Next, we reconstructed the scene using the partial sparsity approach with 26% data volume. The number of OMP iterations, usually associated with the sparsity level of the scene, was set to 10. In this case, and for all subsequent sparse imaging results, each imaged pixel is the result of averaging 200 runs, with a different random selection for each run. The partial sparsity-based reconstruction of the sparse part of the scene is shown in Figure 8a. We observe from Figure 8a that the partial sparsity-based scheme was able to detect and localize humans 1 through 3 successfully, while it missed human 4. In addition, some clutter (arising from the left chair and table) and background noise is also visible in the reconstructed image.
For comparison, we also performed scene reconstruction using the subspace projection-based wall mitigation CS approach of [28] with 26% data volume. Full frequency data measurements were first recovered from the l_{1} norm-reconstructed range profiles at each considered antenna location. The number of OMP iterations was set to 100 for each range profile reconstruction. This is because the presence of the wall return and clutter renders the range profile quite dense. The subspace projection approach was then applied wherein the first two dominant singular vectors of the 148 × 79 data matrix were used to reconstruct the wall subspace. Finally, standard l_{1} norm image reconstruction was performed on the wall clutter mitigated full frequency recovered data to form an image of the sparse part of the scene, shown in Figure 8b. Similar to the partial sparsity approach, the number of OMP iterations in this case was chosen to be 10. We observe from Figure 8b that human 1 was detected, human 2 was barely detected, while humans 3 and 4 were both missing from the reconstruction. Moreover, significantly more clutter and noise was reconstructed compared to Figure 8a. We, therefore, conclude that the partial sparsity-based approach provides superior performance compared to the subspace projection-based wall mitigation CS approach for the same reduced data volume when different sets of frequencies are employed at the available antennas. This is also confirmed by the corresponding TCR values provided in Table 3 (first two rows).
4.4 Image reconstruction under the same set of reduced frequencies at each available antenna location
We now proceed with image reconstruction when the same reduced set of frequencies is employed at each of the available antenna locations. The corresponding space-frequency sampling pattern is shown in Figure 9. We use the same set of 118 randomly selected frequencies (79.7% of 148) at each of the 79 randomly chosen antenna locations (30% of 239). Figure 10a presents the result of the partial sparsity-based approach with the number of OMP iterations set to 10. We observe from Figure 10a that humans 1 through 3 were successfully localized, while human 4 was missed. In addition, some clutter arising from the furniture and noise were also reconstructed.
We next applied the subspace projection-based wall mitigation approach directly to the reduced dimension data matrix, 118 × 79 instead of 148 × 239 [28]. The wall suppressed data was then used to obtain the l_{1} norm-reconstructed image of the sparse part of the scene, shown in Figure 10b, obtained using OMP with 10 iterations. We observe that the subspace projection scheme was able to detect and localize humans 1, 2, and 4 successfully, while human 3 was barely detected. Residual wall clutter and some of the furniture returns are also visible in the reconstructed image. We, therefore, conclude that the partial sparsity and the subspace decomposition-based wall mitigation CS approaches provide comparable performance for the same reduced data volume when the same set of frequencies is employed at the available antennas. This is validated by Table 4, which shows that the two schemes have comparable TCRs.
4.5 A note on the number of OMP iterations
OMP, like other greedy iterative algorithms, requires the specification of the scene sparsity for exact reconstruction [44]. In most practical situations, including through-the-wall imaging, this information is not available a priori. Therefore, the stopping criterion based on a fixed number of iterations, which is tied to the scene sparsity, is heuristic. Figure 11a,b shows the reconstruction result for the scene in Figure 2 using 26% of the data volume with the number of OMP iterations chosen to be 25 and 45, respectively. The space-frequency sampling pattern of Figure 7 was employed. In both cases, humans 1 through 3 are clearly visible. However, a higher amount of clutter and background noise is reconstructed with increasing number of iterations. The corresponding TCR values are provided in Table 3 (rows 3 and 4).
Use of cross-validation has been proposed to prevent early/late termination of greedy reconstruction algorithms [50]. Cross-validation is a statistical technique that separates a dataset into a training/estimation set and a test/cross-validation set. The test set is used to prevent underfitting/overfitting on the training set. The cross-validation-based OMP reconstruction result using 26% of the data volume is depicted in Figure 12, with one fifth of the measurements used for cross-validation. We observe that the cross-validation-based approach fails to solve the problem and only humans 1 and 2 are visible in the reconstructed image. This is because the signal strength from humans 3 and 4 is either comparable to or weaker than those from sources of clutter, and the cross-validation-based approach regards humans 3 and 4 as part of the background noise level [51]. Although only two of the four humans are detected, the corresponding TCR value is quite high, as shown in Table 3 (last row). This is because very little clutter is reconstructed. Various adaptive approaches have recently been proposed to counter the problem of low signal-to-noise-and-clutter ratio [51, 52]. The potential benefits of these schemes for the problem at hand remain to be explored.
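The cross-validation stopping rule can be sketched as follows. This is our own simplified rendering of the idea in [50], not the authors' implementation: hold out a fraction of the measurements, and stop OMP when the held-out residual stops improving. All names, sizes, and the noise level are illustrative assumptions:

```python
import numpy as np

def omp_cv(D, y, holdout_frac=0.2, max_iter=30, seed=0):
    """OMP with cross-validation stopping: fit atoms on a training subset of
    the measurements and stop when the held-out residual stops decreasing."""
    rng = np.random.default_rng(seed)
    J = len(y)
    test = np.zeros(J, dtype=bool)
    test[rng.choice(J, int(holdout_frac * J), replace=False)] = True
    D_tr, y_tr = D[~test], y[~test]
    D_te, y_te = D[test], y[test]

    support, best_err = [], np.inf
    best_x = np.zeros(D.shape[1])
    residual = y_tr.copy()
    for _ in range(max_iter):
        support.append(int(np.argmax(np.abs(D_tr.T @ residual))))
        coef, *_ = np.linalg.lstsq(D_tr[:, support], y_tr, rcond=None)
        residual = y_tr - D_tr[:, support] @ coef
        cv_err = np.linalg.norm(y_te - D_te[:, support] @ coef)
        if cv_err >= best_err:     # held-out error no longer improves: stop
            break
        best_err = cv_err
        best_x = np.zeros(D.shape[1])
        best_x[support] = coef
    return best_x

# toy demo: 3-sparse signal observed through a random dictionary, with noise
rng = np.random.default_rng(3)
D = rng.standard_normal((80, 120))
D /= np.linalg.norm(D, axis=0)
x0 = np.zeros(120); x0[[5, 50, 90]] = [2.0, -1.0, 0.5]
y = D @ x0 + 0.05 * rng.standard_normal(80)
x_hat = omp_cv(D, y)
```

As the text notes, weak targets whose returns sit near the held-out noise floor can be rejected by this rule, which is the failure mode observed for humans 3 and 4.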
5 Conclusions
In this paper, we applied partial sparsity to scene reconstruction associated with through-the-wall radar imaging of stationary targets. Partially sparse recovery deals with the case where it is known a priori that part of the scene being imaged is dense while the rest is sparse. For the underlying problem, the dense part of the scene corresponds to the building layout, and the support of the corresponding part of the image is assumed to be known beforehand. This knowledge may be available either through building blueprints or from prior surveillance operations. Using numerical EM data of a single-story building, we demonstrated the effectiveness of partially sparse reconstruction in detecting and locating stationary targets in through-the-wall scenes while achieving a sizable reduction in the data volume.
Abbreviations
CS: compressive sensing
EM: electromagnetic
OMP: orthogonal matching pursuit
RCS: radar cross section
SAR: synthetic aperture radar
SVD: singular value decomposition
TCR: target-to-clutter ratio
TWRI: through-the-wall radar imaging
References
1. Amin MG (Ed): Through-the-Wall Radar Imaging. CRC Press, Boca Raton; 2011.
2. Amin MG, Sarabandi K: Remote sensing of building interiors. IEEE Trans. Geosci. Rem. Sens. 2009, 47(5):1267–1420.
3. Lai CP, Narayanan RM: Ultrawideband random noise radar design for through-wall surveillance. IEEE Trans. Aerosp. Electron. Syst. 2010, 46(4):1716–1730.
4. Chang PC, Burkholder RL, Volakis JL, Marhefka RJ, Bayram Y: High-frequency EM characterization of through-wall building imaging. IEEE Trans. Geosci. Rem. Sens. 2009, 47(5):1375–1387.
5. Ahmad F, Amin MG, Zemany PD: Dual-frequency radars for target localization in urban setting. IEEE Trans. Aerosp. Electron. Syst. 2009, 45(4):1598–1609.
6. Le C, Dogaru T, Nguyen L, Ressler MA: Ultrawideband (UWB) radar imaging of building interior: measurements and predictions. IEEE Trans. Geosci. Rem. Sens. 2009, 47(5):1409–1420.
7. Thajudeen C, Hoorfar A, Ahmad F, Dogaru T: Measured complex permittivity of walls with different hydration levels and the effect on power estimation of TWRI target returns. Progress Electromagnet. Res. B 2011, 30:177–199.
8. Dehmollaian M, Sarabandi K: Refocusing through the building walls using synthetic aperture radar. IEEE Trans. Geosci. Rem. Sens. 2008, 46(6):1589–1599.
9. Soldovieri F, Solimene R: Through-wall imaging via a linear inverse scattering algorithm. IEEE Geosci. Remote Sens. Lett. 2007, 4(4):513–517.
10. Baraniuk R, Steeghs P: Compressive radar imaging. Proc. IEEE Radar Conference, Waltham, 17–20 Apr 2007, 128–133.
11. Herman M, Strohmer T: High-resolution radar via compressed sensing. IEEE Trans. Signal Process. 2009, 57(6):2275–2284.
12. Gurbuz A, McClellan J, Scott W Jr: Compressive sensing for subsurface imaging using ground penetrating radar. Signal Process. 2009, 89(10):1959–1972. doi:10.1016/j.sigpro.2009.03.030
13. Ender JHG: On compressive sensing applied to radar. Signal Process. 2010, 90(5):1402–1414. doi:10.1016/j.sigpro.2009.11.009
14. Potter LC, Ertin E, Parker JT, Cetin M: Sparsity and compressed sensing in radar imaging. Proc. of the IEEE 2010, 98(6):1006–1020.
15. Yoon YS, Amin MG: Compressed sensing technique for high-resolution radar imaging. In Proc. SPIE, vol. 6968. SPIE, Bellingham; 2008:69681A.
16. Huang Q, Qu L, Wu B, Fang G: UWB through-wall imaging based on compressive sensing. IEEE Trans. Geosci. Rem. Sens. 2010, 48(3):1408–1415.
17. Leigsnering M, Debes C, Zoubir AM: Compressive sensing in through-the-wall radar imaging. Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Process., Prague, 22–27 May 2011, 4008–4011.
18. Solimene R, Ahmad F, Soldovieri F: A novel CS-SVD strategy to perform data reduction in linear inverse scattering problems. IEEE Geosci. Remote Sens. Lett. 2012, 9(5):881–885.
19. Amin MG, Ahmad F: Compressive sensing for through-the-wall radar imaging. J. Electron. Imag. 2013, 22(3). doi:10.1117/1.JEI.22.3.030901
20. Amin M, Ahmad F, Zhang W: A compressive sensing approach to moving target indication for urban sensing. Proc. IEEE Radar Conf., Kansas City, 23–27 May 2011, 509–512.
21. Ahmad F, Amin MG: Through-the-wall human motion indication using sparsity-driven change detection. IEEE Trans. Geosci. Rem. Sens. 2013, 51(2):881–890.
22. Ahmad F, Amin MG, Qian J: Through-the-wall moving target detection and localization using sparse regularization. In Proc. SPIE, vol. 8365. SPIE, Bellingham; 2012.
23. Qian J, Ahmad F, Amin MG: Joint localization of stationary and moving targets behind walls using sparse scene recovery. J. Electron. Imag. 2013, 22(2). doi:10.1117/1.JEI.22.2.021002
24. Yoon YS, Amin MG: Spatial filtering for wall-clutter mitigation in through-the-wall radar imaging. IEEE Trans. Geosci. Rem. Sens. 2009, 47(9):3192–3208.
25. Chandra A, Gaikwad D, Singh D, Nigam M: An approach to remove the clutter and detect the target for ultra-wideband through-wall imaging. J. Geophys. Eng. 2008, 5(4):412–419. doi:10.1088/1742-2132/5/4/005
26. Tivive F, Bouzerdoum A, Amin M: An SVD-based approach for mitigating wall reflections in through-the-wall radar imaging. Proc. IEEE Radar Conf., Kansas City, 23–27 May 2011, 519–524.
27. Ahmad F, Amin MG: Wall clutter mitigation for MIMO radar configurations in urban sensing. Proc. 11th Int. Conf. Information Science, Signal Process., and their Applications, Montreal, 2–5 July 2012.
28. Lagunas E, Amin M, Ahmad F, Nájar M: Joint wall mitigation and compressive sensing for indoor image reconstruction. IEEE Trans. Geosci. Rem. Sens. 2013, 51(2):891–906.
29. Lagunas E, Amin M, Ahmad F, Nájar M: Wall mitigation techniques for indoor sensing within the CS framework. Proc. Seventh IEEE Workshop on Sensor Array and Multi-Channel Signal Processing, Hoboken, 17–20 June 2012.
30. Bandeira AS, Scheinberg K, Vicente LN: On partially sparse recovery. Preprint 11-13, Dept. of Mathematics, Univ. Coimbra, 2011. http://www.optimization-online.org/DB_FILE/2011/04/2990.pdf. Accessed 27 Feb 2014
31. Vaswani N, Lu W: Modified-CS: modifying compressive sensing for problems with partially known support. IEEE Trans. Signal Process. 2010, 58(9):4595–4607.
32. Leigsnering M, Amin MG, Ahmad F, Zoubir AM: Multipath exploitation and suppression for SAR imaging of building interiors. IEEE Signal Process. Mag. 2014, 31(4). doi:10.1109/MSP.2014.2312203
33. Lagunas E, Amin MG, Ahmad F, Najar M: Determining building interior structures using compressive sensing. J. Electron. Imag. 2013, 22(2). doi:10.1117/1.JEI.22.2.02100
34. van Rossum W, de Wit J, Tan R: Radar imaging of building interiors using sparse reconstruction. Proc. 9th European Radar Conference, Amsterdam, 31 Oct–2 Nov 2012, 30–33.
35. Amin MG, Ahmad F: Wideband synthetic aperture beamforming for through-the-wall imaging. IEEE Signal Process. Mag. 2008, 25(4):110–113.
36. Ahmad F, Amin MG, Kassam SA: A beamforming approach to stepped-frequency synthetic aperture through-the-wall radar imaging. Proc. First IEEE Int. Workshop on Computational Advances in Multi-Sensor Adaptive Process., Puerto Vallarta, 13–15 Dec 2005, 24–27.
37. Gerry M, Potter L, Gupta I, van der Merwe A: A parametric model for synthetic aperture radar measurements. IEEE Trans. Antenn. Propag. 1999, 47(7):1179–1188. doi:10.1109/8.785750
38. Ahmad F, Amin MG: Partially sparse reconstruction of behind-the-wall scenes. In Proc. SPIE, vol. 8365. SPIE, Bellingham; 2012:83650W.
39. Boyd S, Vandenberghe L: Convex Optimization. Cambridge University Press, Cambridge; 2004.
40. Candes EJ, Tao T: Near optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inf. Theory 2006, 52(12):5406–5425.
41. Chen SS, Donoho DL, Saunders MA: Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 1999, 20(1):33–61.
42. Mallat S, Zhang Z: Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 1993, 41(12):3397–3415. doi:10.1109/78.258082
43. Tropp JA: Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50(10):2231–2242. doi:10.1109/TIT.2004.834793
44. Tropp JA, Gilbert AC: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53(12):4655–4666.
45. Ahmad F, Amin MG, Dogaru T: A beamforming approach to imaging of stationary indoor scenes under known building layout. Proc. IEEE 5th Int. Workshop on Computational Advances in Multi-Sensor Adaptive Process., St. Martin, 15–18 Dec 2013, 105–108.
46. Dogaru T, Nguyen L, Le C: Computer models of the human body signature for sensing through the wall radar applications. ARL-TR-4290, U.S. Army Research Lab, Adelphi, MD, 2007. http://www.dtic.mil/dtic/tr/fulltext/u2/a473937.pdf. Accessed 27 Feb 2014
47. Dogaru T, Le C: Through-the-wall small weapon detection based on polarimetric radar techniques. ARL-TR-5041, U.S. Army Research Lab, Adelphi, MD, 2009. http://www.dtic.mil/dtic/tr/fulltext/u2/a510201.pdf. Accessed 27 Feb 2014
48. Ahmad F, Amin M: Stochastic model based radar waveform design for weapon detection. IEEE Trans. Aerosp. Electron. Syst. 2012, 48(2):1815–1826.
49. Tivive FHC, Amin MG, Bouzerdoum A: Wall clutter mitigation based on eigen-analysis in through-the-wall radar imaging. Proc. 17th Int. Conf. on Digital Signal Process., Corfu, 6–8 July 2011.
50. Boufounos P, Duarte M, Baraniuk R: Sparse signal reconstruction from noisy compressive measurements using cross validation. Proc. IEEE Workshop on Statistical Signal Process., Madison, 26–29 Aug 2007, 299–303.
51. Sun H, Nallanathan A, Jiang J, Poor HV: Compressive autonomous sensing (CASe) for wideband spectrum sensing. Proc. IEEE International Communications Conf., 10–15 June 2012, 4442–4446.
52. Do TT, Lu G, Nguyen N, Tran TD: Sparsity adaptive matching pursuit algorithm for practical compressed sensing. Proc. 42nd Asilomar Conf. on Signals, Systems and Computers, 26–29 Oct 2008, 581–587.
Acknowledgements
This work is supported by ARO and ARL under contract W911NF-11-1-0536.
Author information
Authors and Affiliations
Corresponding author
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Ahmad, F., Amin, M.G. & Dogaru, T. Partially sparse imaging of stationary indoor scenes. EURASIP J. Adv. Signal Process. 2014, 100 (2014). https://doi.org/10.1186/1687-6180-2014-100
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/1687-6180-2014-100
Keywords
 Sparse reconstruction
 Partial sparsity
 Compressive sensing
 Through-the-wall radar