In the preceding section, we found the best signals within the parameterized signal sets ΩA0,…,ΩH0, which are subsets of Ω0. We observed that the signal set 6D-OPFS (ΩA0) and the signal set proposed by Price (ΩD0) outperform the other parameterized signal sets for the cost functions we considered. However, the best overall signal from the signal set Ω0 may not lie in any of ΩA0,…,ΩH0. In order to characterize how good \(s_{A}^{*}(\beta), \ldots, s_{H}^{*}(\beta)\), and particularly the optimal signals \(s_{A}^{*}(\beta)\) and \(s_{D}^{*}(\beta)\), are, we are interested in estimating the fraction of signals in Ω0 that are outperformed by \(s_{A}^{*}(\beta)\) and \(s_{D}^{*}(\beta)\). To do this, we run an experiment in which we randomly generate signals from Ω0 according to a uniform distribution.
4.1 Random signal experiment design
In our experiment, we draw L random signals and, assuming that none of them outperforms the reference signal s∗, assess what this tells us about the (true) fraction p of Ω0 that outperforms s∗. Let \({\hat {p}}\) be an estimate of p. If, under the assumption that a fraction \({\hat {p}}\) of Ω0 outperforms s∗, the probability of s∗ outperforming all L samples is at most 1−η, we say that we have confidence η that \(p<{\hat {p}}\). Therefore, in order to have confidence at least as large as η, we require
$$(1-{\hat{p}})^{L} \leq (1-\eta). $$
Taking the natural logarithm of both sides and dividing by L yields
$$\ln(1-{\hat{p}}) \leq \frac{1}{L}\ln(1-\eta). $$
Since \(\ln (1-{\hat {p}})\approx -{\hat {p}}\) for small \({\hat {p}}\), we can rearrange this expression to obtain the estimate \({\hat {p}}\) as a function of L and η as
$${\hat{p}}(L,\eta) = \frac{-\ln(1-\eta)}{L} $$
(the smallest value for \({\hat {p}}\) satisfying the inequality and thus achieving the desired confidence level). One may also express this as the number of trials L needed to attain confidence η for a given fraction \({\hat {p}}\):
$$L({\hat{p}},\eta) = \frac{-\ln(1-\eta)}{{\hat{p}}} $$
(the smallest L satisfying the inequality). With a confidence level of η=95%, the above expressions yield (approximately)
$$\begin{aligned} {\hat{p}}(L) &= 3/L; \\ L({\hat{p}}) &= 3/{\hat{p}}. \end{aligned} $$
The first of these equations corresponds to the “rule of three” [25].
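These expressions are simple to evaluate numerically. The following Python sketch (the function names p_hat and num_trials are ours, introduced only for illustration) reproduces the rule-of-three values for the 95% confidence level used in this section:

```python
import math

def p_hat(L, eta):
    # Smallest fraction p-hat resolvable with confidence eta after L trials.
    return -math.log(1.0 - eta) / L

def num_trials(p_hat_target, eta):
    # Smallest number of trials L giving confidence eta for fraction p-hat.
    return math.ceil(-math.log(1.0 - eta) / p_hat_target)

print(p_hat(10**6, 0.95))      # ~3.0e-06, i.e., p_hat(L) is roughly 3/L
print(num_trials(1e-6, 0.95))  # 2995733, i.e., L(p_hat) is roughly 3/p_hat
```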
4.2 Generation of random signals from Ω0
We wish to generate a random signal s(t) from within Ω0 according to a uniform probability distribution. The time variable t is quantized so that
$$ t \in \{ -T/2, -T/2 + T/M, -T/2 + 2T/M, \ldots, T/2 \}. $$
(11)
We will also assume a quantized set of frequency values:
$$ f(t) \in \{ -B/2, -B/2 + B/N, -B/2 + 2B/N, \ldots, B/2 \}. $$
(12)
Let tm=−T/2+mT/M. With f(tm)=−B/2+n(m)B/N, we will for ease of notation refer to the pair (tm,f(tm)) as (m,n(m)). Given that a frequency function f(t) passes through (m,n(m)), there is some finite number of different ways for f(t) to continue to f(T/2)=B/2 while satisfying the conditions required of Ω. Let K(m,n) denote the number of signals that pass through (m,n) in this quantized version of Ω0. Then, under a uniform probability distribution, the probability that f(t) passes through (m+1,n(m+1)), given that it passes through (m,n(m)), is
$$ Pr\left((m+1,n(m+1)) \mid (m,n(m))\right) = \frac{K(m+1,n(m+1))}{\sum_{k=n(m)}^{N} K(m+1,k)}. $$
(13)
Note that the expression (13) reflects that f(t) must be nondecreasing. To be able to generate signals in Ω0 randomly, we need to determine K(m,n) for all m=0,1,…,M−1 and all n. This is done working from m=M−1 (corresponding to one step shy of t=T/2) backwards to m=0 as follows:
- 1.
K(M−1,n)=1 for all n=0,1,...,N.
- 2.
For m=M−2 down to m=0 and each n=0,1,…,N, \(K(m,n) = \sum _{k=n}^{N} K(m+1,k).\)
The validity of step 1 stems from the fact that all f(t) satisfy f(T/2)=B/2, so that no matter what the value of n(M−1), there is only one possible choice for n(M). The validity of step 2 is seen by noting that if f(t) passes through (m,n), then the frequency index at time index m+1 must lie in {n,n+1,…,N}, so that the total number of signals passing through (m,n) is the sum of the total number of signals passing through each of (m+1,n),(m+1,n+1), …, (m+1,N). Note also that the total number of quantized f(t) that can be generated this way is
$$ K_{0} = \sum_{n=0}^{N} K(1,n). $$
(14)
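The two steps above amount to a backward dynamic-programming sweep over the time-frequency grid. The following Python sketch is one minimal implementation; the function name path_counts and the zero-based indexing (frequency levels 0,…,N−1 instead of 0,…,N) are our own conventions:

```python
def path_counts(num_times, num_freqs):
    # K[m][n]: number of nondecreasing frequency paths from grid point
    # (t_m, f_n) to the fixed endpoint (t_{num_times-1}, f_{num_freqs-1}).
    M, N = num_times, num_freqs
    K = [[0] * N for _ in range(M - 1)]
    K[M - 2] = [1] * N                 # step 1: one way to make the final hop
    for m in range(M - 3, -1, -1):     # step 2: sweep backwards in time
        for n in range(N):
            # sum over the continuations via (m+1, n), (m+1, n+1), ...
            K[m][n] = sum(K[m + 1][n:])
    return K
```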
We will demonstrate this process of computing the number of frequency paths K(m,n) going through (tm,f(tm)) with an example where M=6 and N=5. The final results of this example are shown in Fig. 6. Due to the antisymmetric nature of the frequency function, it is enough to generate only the first half of the frequency function. Each cell shows the pair (m,n) above the value K(m,n). Hence, for this example, the frequency starts at −B/2 at time −T/2, denoted by location (t0,f0), and ends at frequency 0 at time 0, denoted by location (t5,f4). To compute the number of frequency paths through each location (tm,fn), we start from the destination (t5,f4). As given in step 1, the number of paths from time t4 is K(4,n)=1 for n=0,1,2,3,4, shown in red in locations (t4,f0) to (t4,f4) of Fig. 6. The numbers of frequency paths for the time points t3 down to t0 are computed as in step 2. For example, the number of frequency paths through (t3,f3) is two: one is (t3,f3)→(t4,f3)→(t5,f4), and the other is (t3,f3)→(t4,f4)→(t5,f4). We compute this number from step 2 as K(3,3)=K(4,3)+K(4,4)=2. Following this procedure, the number of possible frequency paths from any (tm,fn) to (t5,f4) is the sum of the frequency paths from (tm+1,fn),(tm+1,fn+1),…,(tm+1,f4). At time t0, the only possible frequency location is f0, since it is the start of the frequency function. To compute the number of frequency paths from (t0,f0) to (t5,f4), we add all the frequency paths from (t1,f0),(t1,f1),(t1,f2),(t1,f3), and (t1,f4) as
$$ K(0,0) = \sum_{n=0}^{4}K(1,n) = 35+20+10+4+1 = 70, $$
(15)
which is shown in the lower-left location of Fig. 6.
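Running the path_counts sketch from above on this example reproduces the entries of Fig. 6:

```python
K = path_counts(6, 5)   # 6 time points t_0..t_5, 5 frequency levels f_0..f_4
print(K[4])             # [1, 1, 1, 1, 1], as in step 1
print(K[1])             # [35, 20, 10, 4, 1]
print(K[0][0])          # 70, matching (15)
```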
Once we have created the table of frequency-path counts, the next step is to generate a frequency function. The frequency function starts from location (t0,f0). At this location, there are 70 possible frequency functions. Thirty-five of those frequency functions go through location (t1,f0), 20 of them go through (t1,f1), 10 of them go through (t1,f2), four of them go through (t1,f3), and one of them goes through (t1,f4). To randomly pick a frequency path with a uniform probability distribution, we choose a random integer η from 1 to 70 inclusive with uniform distribution. η then determines the choice of frequency location at time t1 as given below:
$$ f(t_{1})= \left\{\begin{array}{ll} f_{0}, & \text{if}\ 1 \leq \eta \leq 35 \\ f_{1}, & \text{if}\ 36 \leq \eta \leq 55 \\ f_{2}, & \text{if}\ 56 \leq \eta \leq 65 \\ f_{3}, & \text{if}\ 66 \leq \eta \leq 69 \\ f_{4}, & \text{if}\ \eta=70. \end{array}\right. $$
(16)
From the selected frequency location at time t1, we repeat this procedure to randomly select a frequency value at t2. For example, say the frequency location at t1 is f2. From this location, 10 frequency paths are possible, as shown in Fig. 6. Now we choose a random integer η from 1 to 10 inclusive, and the frequency location at time t2 is
$$ f(t_{2})= \left\{\begin{array}{ll} f_{2}, & \text{if}\ 1 \leq \eta \leq 6 \\ f_{3}, & \text{if}\ 7 \leq \eta \leq 9 \\ f_{4}, & \text{if}\ \eta=10. \end{array}\right. $$
(17)
Note that in this case the frequency values f0 and f1 are not considered at t2, because the frequency function must be nondecreasing. We repeat this procedure, selecting η with a uniform probability distribution at each of the remaining time instances, until the frequency function reaches the destination (t5,f4); this gives the first half of the frequency function. We then obtain the complete frequency function as
$$ f(m)= \left\{\begin{array}{ll} f(m), & \text{if}\ 1 \leq m \leq (M+1)/2 \\ -f(M+1-m), & \text{if}\ (M+1)/2+1 \leq m \leq M \end{array}\right. $$
(18)
for odd M and
$$ f(m)= \left\{\begin{array}{ll} f(m), & \text{if}\ 1 \leq m \leq M/2 \\ -f(M+1-m), & \text{if}\ M/2+1 \leq m \leq M \end{array}\right. $$
(19)
for even M.
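Combining the transition rule (13) with the threshold selection of (16) and (17) and the antisymmetric extension (18)/(19), the whole sampling procedure can be sketched in Python as follows. The names sample_half_path and full_frequency are ours, and random.randint supplies the uniform integer denoted η above:

```python
import random

def sample_half_path(K):
    # Walk forward from (t_0, f_0), choosing each next frequency index with
    # probability proportional to the path counts, cf. (13), (16), and (17).
    M, N = len(K) + 1, len(K[0])
    path = [0]                                # the half path starts at f_0
    for m in range(1, M - 1):
        n = path[-1]
        counts = K[m][n:]                     # reachable levels n, ..., N-1
        eta = random.randint(1, sum(counts))  # uniform integer, as in (16)
        running = 0
        for offset, c in enumerate(counts):
            running += c
            if eta <= running:
                path.append(n + offset)
                break
    path.append(N - 1)                        # the half path ends at the top level
    return path

def full_frequency(path, num_freqs, B):
    # Levels 0..num_freqs-1 span [-B/2, 0] over the first half of the signal;
    # the second half follows from the antisymmetric extension (18)/(19).
    half = [-B / 2 + n * (B / 2) / (num_freqs - 1) for n in path]
    return half + [-v for v in reversed(half[:-1])]

# Example on the toy grid of Fig. 6, with B = 1000 as in Subsection 4.3:
freq = full_frequency(sample_half_path(path_counts(6, 5)), 5, 1000.0)
```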
Using this frequency function, we obtain the associated FM signal as
$$ s(m) = \exp \left(2\pi j \sum_{l=1}^{m} f(l)\, \Delta t \right). $$
(20)
Here, Δt is the time interval between consecutive frequency points. The characteristics of the signal s may not be suitable for radar applications. Hence, we include the signal s in the signal set Ω0 only if it satisfies the AC mainlobe width and bandwidth conditions mentioned in (1.1).
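In code, (20) is a cumulative phase sum; a minimal NumPy version (our sketch, with fm_signal a name of our choosing) is:

```python
import numpy as np

def fm_signal(freq, dt):
    # Eq. (20): s(m) = exp(2*pi*j * sum_{l <= m} f(l) * dt).
    return np.exp(2j * np.pi * np.cumsum(np.asarray(freq)) * dt)
```

A signal generated this way would still have to pass the AC mainlobe width and bandwidth checks of (1.1), which are not reproduced here, before being admitted to Ω0.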
4.3 Experiment parameters and results
Using the procedure from Subsection 4.2, we generated \(L=10^{6}\) random signals from the signal set Ω0 with bandwidth B=1000, time width T=0.1, and number of samples M=1001. We find the best randomly picked signal with respect to the cost metric Q(s;β) for different weights β. The magenta curve (solid line with diamond-shaped markers) in Fig. 5 corresponds to the cost of the best random signal selected for each weight β. We can observe that the performance of the best random signals is in the same range as that of most of the parameterized signal sets. However, none of these random signals outperforms the best low-dimensional signals for all the considered weights (the 6D-OPFS signal for β=0,0.25,0.50,0.75 and Price's signal for β=1.00). Hence, as argued at the beginning of this section, we estimate with a 95% confidence level that the 6D-OPFS and Price's signals outperform at least a fraction \((1-\frac {3}{10^{6}}) = 99.9997\%\) of the signals in the signal set Ω0. From this simulation, we can infer that searching for the best radar signal within the 6D-OPFS and Price's signal models requires far less computation, at the cost of only a small degradation in performance.
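As a numerical check, the rule-of-three sketch from Subsection 4.1 reproduces the quoted bound for these parameters:

```python
print(p_hat(10**6, 0.95))  # ~3.0e-06: with 95% confidence, p < 3/10^6
print(1 - 3 / 10**6)       # 0.999997, i.e., the 99.9997% figure
```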