### 3.1 Image-similarity features

1. *Spectral Distance Feature*: In hyperspectral image processing, the spectral angle cosine (SAC) [17] is often used to calculate the similarity of two spectral curves. Assuming that *x* and *y* represent the average spectral vector of the target and background in a certain area, respectively, the calculation formula of SAC is:

$$s_{{{\text{SAC}}}} \left( {x,y} \right) = \cos \left\langle {x,y} \right\rangle = \frac{{\left\langle {x,y} \right\rangle }}{{\sqrt {\left\langle {x,x} \right\rangle } \sqrt {\left\langle {y,y} \right\rangle } }}$$

(1)

The above formula shows that the spectral angle distance is the cosine of the generalized angle between the two spectral curve vectors. The higher the similarity between the two spectral curves, the greater the cosine of the included angle, reflecting a better camouflage effect of the target. Note that the spectral angle distance measures only curve similarity: in the high-dimensional space it reflects only whether the two feature vectors are parallel, independent of their magnitudes, so it effectively avoids interference from factors such as the solar incidence angle and brightness. As a distance measure, the spectral angle distance therefore offers strong objectivity and reliability in camouflage evaluation.
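
As an illustrative sketch, Eq. (1) and its scale invariance can be checked in a few lines of NumPy (the function name is ours):

```python
import numpy as np

def spectral_angle_cosine(x: np.ndarray, y: np.ndarray) -> float:
    """Generalized cosine between two mean spectral vectors (Eq. 1)."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Scaling x (e.g., a uniform brightness change) leaves the cosine unchanged,
# which is why SAC is insensitive to illumination intensity.
x = np.array([0.2, 0.4, 0.6, 0.5])   # hypothetical mean target spectrum
y = np.array([0.1, 0.2, 0.3, 0.25])  # hypothetical mean background spectrum
print(spectral_angle_cosine(x, y))        # parallel curves give a value of 1
print(spectral_angle_cosine(3.0 * x, y))  # unchanged under scaling of x
```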

2. *Spectral Derivative Feature*: Spectral derivatives can reduce the characteristic interference caused by partial atmospheric transmission and offer advantages in geophysical spectral morphological analysis [18]. The spectral derivative characteristic (SDC) reflects subtle changes in the spectral curve, mainly slope and gradient information. This change highlights the absorption characteristics of the material in the characteristic bands and better expresses the essential characteristics of the target and the background. Considering a hyperspectral image \(X = \left\{ {x_{i} } \right\}_{i = 1}^{n}\) with d bands and n pixels, the spectral derivative of the *i*-th pixel can be defined as:

$$x^{\prime}_{i} = {{{\text{d}}x_{i} } \mathord{\left/ {\vphantom {{{\text{d}}x_{i} } {{\text{d}}\lambda }}} \right. \kern-0pt} {{\text{d}}\lambda }}$$

(2)

Discretizing the above expression yields the differential form of \(x^{\prime}_{i}\):

$$x^{\prime}_{i} = \left[ {\left| {x_{i,2} - x_{i,1} } \right|,\left| {x_{i,3} - x_{i,2} } \right|, \ldots ,\left| {x_{i,d} - x_{i,d - 1} } \right|} \right]$$

(3)

Similarly, the average spectral derivatives \(x^{\prime}\) and \(y^{\prime}\) correspond to the target and background, respectively, which are used to compute generalized cosine values:

$$s_{{{\text{SDC}}}} \left( {x,y} \right) = \cos \left\langle {x^{\prime},y^{\prime}} \right\rangle = \frac{{\left\langle {x^{\prime},y^{\prime}} \right\rangle }}{{\sqrt {\left\langle {x^{\prime},x^{\prime}} \right\rangle } \sqrt {\left\langle {y^{\prime},y^{\prime}} \right\rangle } }}$$

(4)

The spectral-curve variation information captured by the spectral derivative feature evaluates the camouflage effect from the perspective of curve shape. Describing the similarity of target and background with spectral derivative features therefore enhances the objectivity of camouflage effect evaluation.
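
Eqs. (2)–(4) can be sketched directly from the definitions above (function names are ours; `np.diff` implements the band-to-band differences of Eq. (3)):

```python
import numpy as np

def spectral_derivative(v: np.ndarray) -> np.ndarray:
    """Discrete spectral derivative of Eq. (3): absolute band-to-band differences."""
    return np.abs(np.diff(v))

def sdc_similarity(x: np.ndarray, y: np.ndarray) -> float:
    """Generalized cosine between the two derivative vectors (Eq. 4)."""
    xp, yp = spectral_derivative(x), spectral_derivative(y)
    return float(np.dot(xp, yp) / (np.linalg.norm(xp) * np.linalg.norm(yp)))

# Hypothetical mean spectra of target and background over d = 5 bands:
x = np.array([0.2, 0.35, 0.55, 0.5, 0.6])
y = np.array([0.25, 0.4, 0.6, 0.55, 0.66])
print(sdc_similarity(x, y))  # close to 1: similar curve shapes
```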

3. *Curve Shape Feature*: Different from the spectral derivative feature, which captures the “slope” at each point, the spectral correlation coefficient (SCC) integrates the spectral vector information, and the absolute correlation degree of the spectral curve (ASC) then determines the degree of similarity between two spectral curves [16].

Deviation standardization was used to normalize the target spectrum and background spectrum:

$$Y_{i} = \frac{{x_{i} - \min \left( {x_{i} } \right)}}{{\max \left( {x_{i} } \right) - \min \left( {x_{i} } \right)}}$$

(5)

For the normalized average spectral derivatives of the target and background, the spectral correlation coefficient is calculated as:

$$\xi \left( j \right) = \frac{1}{{1 + \left| {x^{\prime}\left( j \right) - y^{\prime}\left( j \right)} \right|}}$$

(6)

where \(j = 1,2, \ldots ,d - 1\). Averaging the spectral correlation coefficients yields the absolute correlation of the spectral curve, which can be expressed as:

$$s_{{{\text{ASC}}}} = \frac{1}{d - 1}\sum_{j = 1}^{d - 1} {\xi \left( j \right)}$$

(7)

The absolute correlation calculated by the above formula reflects the line-shape characteristics of the two spectral curves and is comparable across targets. The greater the absolute correlation, the higher the spectral similarity between the target and the background; conversely, the smaller the value, the worse the similarity and the poorer the camouflage effect.
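
A minimal sketch of Eqs. (6)–(7), operating on the derivative vectors \(x^{\prime}\) and \(y^{\prime}\) (function name is ours):

```python
import numpy as np

def absolute_spectral_correlation(xp: np.ndarray, yp: np.ndarray) -> float:
    """Eqs. (6)-(7): mean of the per-band coefficients
    xi(j) = 1 / (1 + |x'(j) - y'(j)|), j = 1, ..., d-1."""
    xi = 1.0 / (1.0 + np.abs(xp - yp))
    return float(xi.mean())

# Identical derivative vectors give the maximum value 1:
xp = np.array([0.15, 0.2, 0.05, 0.1])
print(absolute_spectral_correlation(xp, xp))  # 1.0
yp = np.array([0.1, 0.3, 0.0, 0.2])
print(absolute_spectral_correlation(xp, yp))  # < 1: shapes differ
```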

4. *Spatial Texture Feature*: Texture is a visual feature that reflects grayscale statistics and distribution characteristics and spatial arrangement structure, which is different from image features such as grayscale and color. For hyperspectral data, the texture structure reflects the spectral difference between different spatial pixels and reflects the spatial transformation law of the spectrum. Therefore, using spectral data to express the spatial texture structure and establishing a camouflage evaluation index based on spatial texture features is of great significance for distinguishing camouflage from background.

Principal component analysis is the most commonly used feature extraction algorithm, and the obtained first principal component reflects the maximum amount of information of the spectral data. Therefore, the spectral data can be converted into a grayscale image through the dimensionality reduction method, and then the texture extraction algorithm can be used to obtain the largest feature of the grayscale image.

The wavelet transform [19] is the most commonly used feature extraction algorithm in the process of obtaining texture structures. Multilayer wavelet decomposition is used for the first principal component, and four wavelet components are used as variables to extract texture features:

$$\left\{ {\begin{array}{*{20}l} {T_{1} = {{\left( {cH_{m} + cV_{m} + cD_{m} } \right)} \mathord{\left/ {\vphantom {{\left( {cH_{m} + cV_{m} + cD_{m} } \right)} {cA_{m} }}} \right. \kern-0pt} {cA_{m} }}} \hfill \\ {T_{2} = {{cH_{m} } \mathord{\left/ {\vphantom {{cH_{m} } {cV_{m} }}} \right. \kern-0pt} {cV_{m} }}} \hfill \\ \end{array} } \right.$$

(8)

where \(cH_{m}\), \(cV_{m}\), and \(cD_{m}\) are the horizontal, vertical, and diagonal wavelet components, respectively; \(T_{1}\) is the ratio of high-frequency components to low-frequency components after the wavelet transformation, which expresses the spatial-frequency characteristics of the image texture; \(T_{2}\) is the ratio of horizontal to vertical components after the wavelet transformation, which expresses the directional characteristics of the image texture. Then, the spatial texture feature (STF) can be expressed as a weighted combination of \(T_{1}\) and \(T_{2}\), namely:

$$T = K\left( {T_{1} ,T_{2} } \right)$$

(9)

The difference between the spatial texture features of the target and background, \(T_{x}\) and \(T_{y}\), is used to define the similarity index:

$$s_{{{\text{STF}}}} = {1 \mathord{\left/ {\vphantom {1 {\left( {1 + \left| {T_{x} - T_{y} } \right|} \right)}}} \right. \kern-0pt} {\left( {1 + \left| {T_{x} - T_{y} } \right|} \right)}}$$

(10)
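
The texture pipeline above (dimensionality reduction, wavelet decomposition, Eqs. (8)–(10)) can be sketched as follows. This is a minimal single-level Haar transform written directly in NumPy rather than a wavelet library; reducing each wavelet component to its energy before forming the ratios of Eq. (8), and the equal-weight combination standing in for the unspecified \(K(\cdot)\) of Eq. (9), are our assumptions:

```python
import numpy as np

def haar_dwt2(img: np.ndarray):
    """One level of a 2D Haar wavelet transform; returns (cA, cH, cV, cD)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-wise average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-wise detail
    cA = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    cV = (a[:, 0::2] - a[:, 1::2]) / 2.0      # vertical detail
    cH = (d[:, 0::2] + d[:, 1::2]) / 2.0      # horizontal detail
    cD = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return cA, cH, cV, cD

def texture_features(img: np.ndarray):
    """Eq. (8), computed from the energies of the wavelet components."""
    cA, cH, cV, cD = haar_dwt2(img)
    eA, eH, eV, eD = (np.sum(c ** 2) for c in (cA, cH, cV, cD))
    t1 = (eH + eV + eD) / eA   # high-/low-frequency ratio
    t2 = eH / eV               # horizontal/vertical ratio
    return t1, t2

def stf(img: np.ndarray, w1: float = 0.5, w2: float = 0.5) -> float:
    """Eq. (9) with an assumed equal-weight combination (K is not specified)."""
    t1, t2 = texture_features(img)
    return w1 * t1 + w2 * t2

def stf_similarity(tx: float, ty: float) -> float:
    """Eq. (10): similarity from the texture-feature difference."""
    return 1.0 / (1.0 + abs(tx - ty))

# Hypothetical first-principal-component images of target and background:
rng = np.random.default_rng(0)
target, background = rng.random((8, 8)), rng.random((8, 8))
print(stf_similarity(stf(target), stf(background)))
```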

### 3.2 A comprehensive evaluation index system

Different similarity characteristics reflect the camouflage characteristics of the target relative to the background from different angles. On this basis, it is necessary to integrate the characteristics of various indicators to find a comprehensive measurement method that can better reflect the camouflage characteristics of the target. The difficulty of comprehensive evaluation lies not in the “cognitive uncertainty” in fuzzy mathematics but in how to obtain the inherent meaning of the camouflage characteristics from the indicators that have been obtained. Gray system theory has strong superiority in solving this problem.

The comprehensive evaluation is based on the multifeature description of the background and camouflage images, and its ultimate goal is a comprehensive decision measure of the camouflage effect. Two links are essential: weight construction based on an improved Delphi method, and gray clustering comprehensive evaluation based on the whitening weight function. In essence, the expert decision-making process is simulated; combined with the clustering characteristics of gray evaluation, this yields a more intuitive and detailed comprehensive index that explains the relative merits of the camouflage effect.

1. *Linear Normalization*: To achieve the consistency of each index in the multi-index system, all indices are required to have the same dimension, order of magnitude and unit. Therefore, different index data need to be standardized.

Three normalization methods are commonly considered: standardization, linear scale transformation and range transformation. The linear scale transformation retains the original feature differences to the greatest extent, and the methods differ in the data requirements they impose (trending toward “0,” toward “1,” or toward the center). The nondimensional processing method of linear proportional transformation is therefore used here.

Linear scale transformation mainly includes the lower-limit, central and upper-limit measures, and the different measures suit different application scopes. The upper-limit measure does not change the discrimination direction of the original index and tends toward “1,” reflecting the difference between target and background as far as possible. The upper-limit measure is therefore used here:

$$\delta_{ki} = {{d_{k}^{i} } \mathord{\left/ {\vphantom {{d_{k}^{i} } {\mathop {\max }\limits_{k} }}} \right. \kern-0pt} {\mathop {\max }\limits_{k} }}d_{k}^{i}$$

(11)
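
The upper-limit measure of Eq. (11) divides each index by its maximum over the evaluated samples; a brief sketch with hypothetical index values (function name is ours):

```python
import numpy as np

def upper_limit_measure(d: np.ndarray) -> np.ndarray:
    """Eq. (11): divide each index column by its maximum over the samples k."""
    return d / d.max(axis=0)

# Rows = evaluated samples k, columns = indices i (hypothetical values):
d = np.array([[0.8, 0.93, 0.6],
              [0.4, 0.62, 0.9]])
print(upper_limit_measure(d))  # each column's maximum becomes 1
```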

2. *Delphi-Based Weight Construction*: For a multi-index evaluation system, the selection of weights is of great significance. Improper weights can easily bias the evaluation results and sometimes mislead decision-makers into incorrect conclusions. Delphi is a commonly used weight construction method, but it is essentially a survey method for collecting expert opinions: it requires processing and analyzing those opinions, is affected by the number and expertise of the experts, and can easily yield inconsistent weighting results. This paper uses big data to simulate the expert decision-making process and obtains the best evaluation weights through comparison and screening [20].

If the initial weight of each index is \(w^{*} = \left\{ {w_{i}^{*} } \right\}_{1 \times n}\), the weight vector construction result of each index can be normalized:

$$\begin{aligned} w & = \left\{ {\frac{{w_{1}^{ * } }}{{\sum_{i = 1}^{n} {w_{i}^{ * } } }},\frac{{w_{2}^{ * } }}{{\sum_{i = 1}^{n} {w_{i}^{ * } } }}, \ldots ,\frac{{w_{n}^{ * } }}{{\sum_{i = 1}^{n} {w_{i}^{ * } } }}} \right\} \\ \, & = \left\{ {w_{1} ,w_{2} , \ldots ,w_{n} } \right\} \\ \end{aligned}$$

(12)
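
Eq. (12) is a simple sum-to-one normalization; as a quick numerical check (function name is ours):

```python
import numpy as np

def normalize_weights(w_star: np.ndarray) -> np.ndarray:
    """Eq. (12): scale the initial weights w* so that they sum to 1."""
    return w_star / w_star.sum()

# Hypothetical initial weights for n = 4 indices:
w = normalize_weights(np.array([2.0, 1.0, 1.0, 4.0]))
print(w)  # [0.25, 0.125, 0.125, 0.5]
```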

3. *Comprehensive Evaluation of Gray Clustering*: The term “gray” refers to a judgment process that is partly known and partly unknown. In actual camouflage evaluation, the characteristic description of things is often known, while the evaluation result is unknown. Gray clustering [21] uses known observation indicators to classify unknown results into custom categories, where each “category” represents a “good or bad” level at the expert decision layer; it is in effect a hierarchical discrimination method with an index direction. Gray clustering mainly includes the following steps:

*Step 1*: Establish an indicator set

Suppose that there are n objects and m evaluation indices, and establish the evaluation index matrix \(U = \left[ {u_{1} ,u_{2} , \cdots ,u_{n} } \right]^{T}\), where \(u_{i}\) is the feature vector of the *i*-th object.

*Step 2*: Normalization of the indicator characteristics

The directly obtained raw data are inconsistent in magnitude and unit, making it difficult to evaluate target camouflage performance on an equal footing; therefore, the raw data of each metric should be normalized first. To better unify the metrics, the central effect measure of the linear scale transformation is used here.

*Step 3*: Construct the whitening weight function and set the evaluation level.

The process of evaluating things from fuzzy to clear corresponds to the gray evaluation system, which is equivalent to the process of grayscale changing from “gray” to “white,” in which the whitening weight function plays a key role.

The evaluation grades are delimited from best to worst according to the evaluation requirements, and s gray classes must be determined according to the number of evaluation grades. Based on this, the range \(\left[ {a_{1} ,a_{s + 1} } \right]\) of an indicator can be divided into s subintervals:

$$\begin{aligned} & \left[ {a_{1} ,a_{2} } \right],\left[ {a_{2} ,a_{3} } \right], \ldots ,\left[ {a_{k - 1} ,a_{k} } \right], \ldots \\ & \left[ {a_{s - 1} ,a_{s} } \right],\left[ {a_{s} ,a_{s + 1} } \right] \\ \end{aligned}$$

(13)

Determine the triangular whitening weight function of the *k*-th class as:

$$f_{j}^{k} \left( \cdot \right)\;\;\left( {j = 1,2, \ldots ,m;k = 1,2, \ldots ,s} \right)$$

(14)

Assuming that the observed value of the *j*-th index is \(x\), its membership degree in the *k*-th gray class can be calculated by the following formula:

$$f_{j}^{k} \left( x \right) = \left\{ {\begin{array}{*{20}c} {\frac{{x - a_{k - 1} }}{{\lambda_{k} - a_{k - 1} }},\;\;\;\;\;\;\;x \in \left[ {a_{k - 1} ,\lambda_{k} } \right)} \\ {\frac{{a_{k} - x}}{{a_{k} - \lambda_{k} }},\;\;\;\;\;\;\;\;\;x \in \left[ {\lambda_{k} ,a_{k} } \right]\;} \\ \end{array} } \right.$$

(15)
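
The triangular whitening weight of Eq. (15) rises linearly from \(a_{k-1}\) to a peak of 1 at \(\lambda_{k}\) and falls back to 0 at \(a_{k}\); taking it to be zero outside \([a_{k-1}, a_{k}]\) is the usual convention, assumed here (function name is ours):

```python
def whitening_weight(x: float, a_lo: float, lam: float, a_hi: float) -> float:
    """Triangular whitening weight of Eq. (15) for one gray class:
    a_lo = a_{k-1}, lam = lambda_k (peak), a_hi = a_k."""
    if a_lo <= x < lam:
        return (x - a_lo) / (lam - a_lo)   # rising edge
    if lam <= x <= a_hi:
        return (a_hi - x) / (a_hi - lam)   # falling edge
    return 0.0                             # outside this class's support

# A gray class covering [0.4, 0.8] with its peak at 0.6:
print(whitening_weight(0.6, 0.4, 0.6, 0.8))  # 1.0 at the peak
print(whitening_weight(0.5, 0.4, 0.6, 0.8))  # 0.5 halfway up the rising edge
```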

*Step 4*: Constructing the index weights

Weights were determined by using a modified Delphi method, using multiple sets of weight data experiments instead of expert decisions, to obtain reliable weight vectors \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{m} } \right)\).

*Step 5*: Calculate the samples’ decision coefficient matrix

The decision coefficient for judging whether object *i* belongs to category *k* can be expressed as:

$$\sigma_{i}^{k} = \sum_{j = 1}^{m} {f_{j}^{k} \left( {x_{ij} } \right)w_{j} }$$

(16)

The sample decision coefficient matrix can be obtained as follows:

$$\Sigma = \left( {\sigma_{i}^{k} } \right) = \left[ {\begin{array}{*{20}c} {\sigma_{1}^{1} } & {\sigma_{1}^{2} } & \cdots & {\sigma_{1}^{s} } \\ {\sigma_{2}^{1} } & {\sigma_{2}^{2} } & \cdots & {\sigma_{2}^{s} } \\ \vdots & \vdots & {} & \vdots \\ {\sigma_{n}^{1} } & {\sigma_{n}^{2} } & \cdots & {\sigma_{n}^{s} } \\ \end{array} } \right]$$

(17)
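
Steps 5 and 6 reduce to a weighted sum followed by a row normalization and an argmax; a compact sketch with hypothetical membership values (all array contents and shapes here are illustrative assumptions):

```python
import numpy as np

# F[i, j, k]: whitening weight of index j under gray class k for object i,
# i.e., f_j^k(x_ij). Hypothetical values for n = 2 objects, m = 2 indices,
# s = 3 classes:
F = np.array([[[0.9, 0.1, 0.0], [0.7, 0.3, 0.0]],
              [[0.1, 0.6, 0.3], [0.0, 0.5, 0.5]]])
w = np.array([0.6, 0.4])  # index weights, summing to 1

sigma = np.einsum("imk,m->ik", F, w)              # Eq. (16): sigma_i^k
delta = sigma / sigma.sum(axis=1, keepdims=True)  # Eq. (18): row-normalize
grade = delta.argmax(axis=1)                      # Step 6: class per object
print(sigma)
print(grade)  # object 0 -> class 0, object 1 -> class 1
```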

*Step 6*: Determine the gray class grade.

The matrix \(\Sigma\) is normalized by rows, yielding the unit decision coefficient matrix:

$$\delta_{i}^{k} = \frac{{\sigma_{i}^{k} }}{{\sum_{k = 1}^{s} {\sigma_{i}^{k} } }}$$

(18)

If \(\max_{1 \le k \le s} \;\left\{ {\delta_{i}^{k} } \right\} = \delta_{i}^{{k^{*} }}\), then \(k^{*}\) is the category to which the *i*-th object belongs (i.e., its evaluation level).

*Step 7*: Determine the comprehensive decision-making degree

After the evaluation levels of all objects are determined, objects at the same level still cannot be compared, so a comprehensive decision degree must be introduced. To construct it, the unit decision coefficients are combined with adjustment coefficients \(\eta\) for the different decision classes:

$$\left\{ {\begin{array}{*{20}c} {\eta_{1} = \left( {s,s - 1,s - 2, \ldots ,1} \right)\;\;\;\;} \\ {\eta_{2} = \left( {s - 1,s - 1,s - 2, \ldots ,2} \right)\;} \\ {\eta_{3} = \left( {s - 2,s - 1,s - 1, \ldots ,3} \right)} \\ \vdots \\ {\eta_{s} = \left( {1,2,3, \ldots ,s - 1,s} \right)\;\;\;\;\;} \\ \end{array} } \right.$$

(19)

\(\eta_{1} ,\eta_{2} , \ldots ,\eta_{s}\) are then called the adjustment coefficients of class 1, class 2, …, class *s*, respectively.

*Step 8*: Comprehensive evaluation results

For two objects at the same level, define their comprehensive decision measures:

$$\left\{ {\begin{array}{*{20}c} {\varepsilon_{1} = \eta_{k} \delta_{1}^{T} } \\ {\varepsilon_{2} = \eta_{k} \delta_{2}^{T} } \\ \end{array} } \right.$$

(20)

If

$$\left\{ \begin{gathered} \mathop {\max }\limits_{1 \le k \le s} \;\left\{ {\delta_{{i_{1} }}^{k} } \right\} = \delta_{{i_{1} }}^{{k^{*} }} \hfill \\ \mathop {\max }\limits_{1 \le k \le s} \;\left\{ {\delta_{{i_{2} }}^{k} } \right\} = \delta_{{i_{2} }}^{{k^{*} }} \hfill \\ \varepsilon_{1} > \varepsilon_{2} \hfill \\ \end{gathered} \right.$$

(21)

Then, it is judged that Target 1 is better than Target 2.
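
Steps 7 and 8 can be sketched as follows, with two hypothetical objects both assigned to the best of s = 3 classes, so the class-1 adjustment coefficient \(\eta_{1} = (s, s-1, \ldots, 1)\) applies (the unit decision rows are illustrative values):

```python
import numpy as np

s = 3
eta = np.array([3.0, 2.0, 1.0])  # eta_1 = (s, s-1, ..., 1) from Eq. (19)

# Unit decision coefficient rows of two objects at the same (best) level:
delta1 = np.array([0.6, 0.3, 0.1])
delta2 = np.array([0.5, 0.4, 0.1])

eps1 = float(eta @ delta1)  # Eq. (20): epsilon_1
eps2 = float(eta @ delta2)  # Eq. (20): epsilon_2
if delta1.argmax() == delta2.argmax() and eps1 > eps2:
    print("object 1 is better than object 2")  # judgment of Eq. (21)
```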