
# A refined affine approximation method of multiplication for range analysis in word-length optimization

EURASIP Journal on Advances in Signal Processing 2014, 2014:36

https://doi.org/10.1186/1687-6180-2014-36

• Received: 8 June 2013
• Accepted: 10 March 2014
• Published:

## Abstract

Affine arithmetic (AA) is widely used for range analysis in word-length optimization of hardware designs. To reduce the uncertainty in AA and achieve efficient and accurate range analysis of multiplication, this paper presents a novel refined affine approximation method, Approximation Affine based on Space Extreme Estimation (AASEE). The affine form of multiplication is divided into two parts. The first part is the approximate affine form of the operation. In the second part, the equivalent affine form of the estimated range of the difference introduced by the approximation is represented by an extra noise symbol. In AASEE, it is proven based on linear geometry that the proposed approximate affine form is the closest to the result of multiplication. The proposed equivalent affine form of AASEE is more accurate because the extreme value theory of multivariable functions is used to minimize the difference between the result of multiplication and the approximate affine form. The computational complexity of AASEE is the same as that of trivial range estimation (AATRE) and lower than that of Chebyshev approximation (AACHA). The proposed affine form of multiplication is demonstrated on polynomial approximation, B-splines, and multivariate polynomial functions. In experiments, the average of the ranges derived by AASEE is 59% and 89% of those derived by AATRE and AACHA, respectively. The integer word lengths derived by AASEE are at most 2 and 1 bits smaller than those derived by AATRE and AACHA, respectively.

## Keywords

• Word-length optimization
• Range analysis
• Affine approximation
• Accuracy
• Uncertainty analysis

## 1 Introduction

As a method of representing real numbers, floating point supports a wide dynamic range and high precision. It has thus been commonly used to represent signals in signal processing, such as image processing, speech processing, and digital signal processing. When these applications are implemented in hardware for high speed and stability, the signals need to be represented in fixed point to optimize the area, power, and speed of the hardware. Hence, floating-point values need to be converted to fixed-point values. This process is named word-length optimization. Its goal is to achieve optimal system performance while satisfying the specification on the system output precision. Word-length optimization involves range analysis and precision analysis: the former finds the minimum word length of the integer part of a value, while the latter focuses on optimizing the fractional part of the word length.

Word-length optimization has been proven to be an NP-hard problem . It can usually be classified into dynamic analysis  and static analysis . By analyzing a large set of stimulus signals, dynamic analysis is applicable to all types of systems. However, it takes a long time of simulation to provide sufficient confidence, and the precision for signals not covered by the simulation cannot be guaranteed. Comparatively, static analysis is an automated and efficient word-length optimization method and is more applicable to large designs. Static analysis mainly uses the characteristics of the input signals to estimate the word length conservatively, which can result in some overestimation . As a part of word-length optimization, range analysis can be classified in the same way.

Affine arithmetic (AA)  is often used for range analysis in static analysis. In AA, every signal is represented in an affine form, which is a first-degree polynomial. As AA tracks the correlations among the range intervals of signals, it can provide more accurate word-length ranges. This makes it suitable for range analysis of the results of linear operations. Besides linear operations, however, nonlinear operations such as multiplication are also involved in hardware designs, typically in linear time-invariant (LTI) systems. AA cannot provide an exact affine form for nonlinear operations. To solve this problem, Stolfi and de Figueiredo  proposed affine approximation methods for multiplication, including trivial range estimation (AATRE) and Chebyshev approximation (AACHA). AATRE is computationally efficient, but the range it produces can be up to four times the real range. The accumulation of the uncertainty of all signals along the computational chain may result in an error explosion, which is unacceptable in applications. Such overestimation obviously cannot satisfy the accuracy requirement of the system, which limits the application of AATRE in large systems. The uncertainty of AACHA is less than that of AATRE; however, it is too complex to be used in large systems. Since LTI operations are covered exactly by AA, the proposed method is applied to the range analysis step of word-length optimization in this paper.

A novel affine approximation method, Approximation Affine based on Space Extreme Estimation (AASEE), is proposed in this paper to reduce the uncertainty of multiplication and achieve accurate and efficient range analysis. To analyze the uncertainty conveniently, we divide each approximation method for multiplication, including AATRE, AACHA, and AASEE, into two parts. The first part, named the approximate affine form, approximates the nonlinear operation. The second part, named the equivalent affine form, is the equivalent affine form of the estimated range of the difference between the result of multiplication and the approximate affine form. The more accurate the two parts are, the more accurate the approximation method is. Based on linear geometry , it is proven that the proposed approximate affine form is the closest to the result of multiplication. To derive the equivalent affine form, we use the extreme value theory of multivariable functions  to estimate, in space, the upper and lower bounds of the difference introduced by the approximation of the first part, so the uncertainty of the proposed method is minimized. The accuracy of the affine form produced by AASEE is higher than that of AATRE and higher on average than that of AACHA. Meanwhile, the computational complexity of AASEE is equivalent to that of AATRE and lower than that of AACHA.

The rest of this paper is organized as follows. The background of range analysis for multiplication is presented in Section 2. Section 3 presents the derivation of the two parts for multiplication. The refined affine form of multiplication, AASEE, is presented in Section 4. In Section 5, we compare the computational complexity and accuracy of AASEE with those of AATRE and AACHA. Case studies and experimental results are demonstrated in Section 6. Section 7 concludes the paper.

## 2 Background

### 2.1 Related work

Interval arithmetic (IA) and affine arithmetic (AA) have been widely used in range analysis in word-length optimization.

IA  is a range arithmetic theory first presented by Moore in 1962. Cmar  employs it for range analysis of digital signal processing (DSP) systems. Carreras  presents a method based on IA; to reduce oversized word lengths, the method provides probability density functions that can be used when some truncation must be performed due to constraints in the specification. However, IA is not suitable for most real-world applications, since it can lead to drastic overestimation of the true range.

AA  was proposed by Stolfi in 1993 to overcome the weakness of IA. In [8, 9], Fang uses AA for word-length optimization, but both range and precision are represented by the same affine form, which limits the optimization. Pu and Ha  also use AA for word-length optimization; they use two different affine forms for range analysis and precision analysis, respectively, and achieve a more refined word-length optimization result. Similarly, Lee et al.  develop an automatic optimization approach, called MiniBit, that produces accuracy-guaranteed solutions and minimizes area while meeting an error constraint. Osborne  uses both IA and AA for range analysis in different situations. Computation using either of the two methods in the design is time-consuming, and the problem of overestimation is serious due to the approximation of the nonlinear operations.

Since AA cannot be used in systems with an infinite number of loop iterations, an improved approach, quantized AA (QAA), has been proposed in  for linear time-invariant systems with feedback loops. This method can provide a fast and tight estimate of the evolution of large sets of numerical inputs using only an affine-based simulation, but it does not provide exact bounds.

AATRE  is adopted for multiplication in most works because of its low computational complexity, but the uncertainty of the range produced by AATRE is very large. To adjust the trade-off between approximation accuracy and computational complexity, Zhang  introduces a new parameter N in the N-level simplified affine approximation (N-SAA). This method is faster than AACHA and more accurate than AATRE, but it is more complex than AATRE, and choosing a suitable N is troublesome. A method of range analysis combining IA, AATRE, and arithmetic transform (AT) is proposed by Pang ; its result is more accurate than AATRE's, while its CPU time is longer. To deal with applications from the scientific computing domain, Kinsman [17, 18] uses computational methods based on Satisfiability Modulo Theories; search efficiency is improved, leading to tighter bounds and thus smaller word lengths.

In all the existing methods, the accuracy of approximation is improved at the expense of computational complexity. This paper presents an affine approximation method for multiplication that achieves a better trade-off between accuracy and computational complexity.

### 2.2 Range analysis

Range analysis studies the data range of every signal and minimizes the integer word length of each signal, on the premise that the signals in the design have enough bits to accommodate their ranges. The range of signal x is represented by x = [xmin, xmax], where the two real numbers xmin and xmax denote the lower and upper bounds of x, respectively. The required integer part of the word length for signal x, represented as IWL_x, can be derived by:
$\text{IWL}_x=\begin{cases}\lceil \log_2(|x|_{\max})\rceil+\alpha, & |x|_{\max}\ge 1\\ 1, & |x|_{\max}<1\end{cases}$
where $|x|_{\max}=\max(|x_{\min}|,|x_{\max}|)$ and
$\alpha=\begin{cases}1, & \operatorname{mod}(\log_2(x_{\max}),1)\ne 0\\ 2, & \operatorname{mod}(\log_2(x_{\max}),1)=0\end{cases}.$
(1)

In (1), all the signals in the design are assumed to be expressed as signed numbers, and the sign bit is taken into account in IWL x . According to (1), once the range of a signal is decided, the integer part of word length of the signal can be derived.
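Eq. (1) can be evaluated directly. The following is a minimal sketch (the function name `iwl` is ours, not from the paper), assuming a positive x_max so that log2(x_max) is defined, as in (1):

```python
import math

def iwl(x_min, x_max):
    # Integer word length per Eq. (1); the sign bit is included via alpha.
    abs_max = max(abs(x_min), abs(x_max))
    if abs_max < 1:
        return 1
    # alpha = 2 when log2(x_max) is an integer (x_max an exact power of two), else 1
    alpha = 2 if math.log2(x_max) % 1 == 0 else 1
    return math.ceil(math.log2(abs_max)) + alpha
```

For example, a signal bounded by [-3, 3] needs ceil(log2 3) + 1 = 3 integer bits, while [0, 4] needs 2 + 2 = 4 because 4 is an exact power of two.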

### 2.3 Affine arithmetic

AA is widely applied for range analysis. In AA, an uncertain signal x is represented by an affine form as a first-degree polynomial :
$\hat{x}=x_0+x_1\epsilon_1+x_2\epsilon_2+\cdots+x_n\epsilon_n,\quad\text{where}\;\epsilon_i\in[-1,1].$
(2)

For the signal x, x0 is the central value and ε_i is the ith noise symbol. Each ε_i denotes an independent uncertainty source that contributes to the total uncertainty of the signal x, and x_i is its coefficient.

The upper and lower bounds for the range of x can be represented as
$x_{\max}=x_0+\sum_{i=1}^{n}|x_i|,\qquad x_{\min}=x_0-\sum_{i=1}^{n}|x_i|.$
(3)
With xmin and xmax, the input interval $\bar{x}=[x_{\min},x_{\max}]$ can be converted into an equivalent affine form as (4), using only one independent noise symbol.
$\hat{x}=x_0+x_1\epsilon_1,\quad\text{with}\;x_0=\frac{x_{\max}+x_{\min}}{2},\quad x_1=\frac{x_{\max}-x_{\min}}{2}.$
(4)

AA keeps correlations among the signals of the computational chain by sharing the same noise symbol ε_i among the affected signals .
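A minimal sketch of (2) to (4) (the class name and representation are ours): an affine form stores the central value and a map from noise-symbol index to coefficient, so shared noise symbols keep correlations across signals.

```python
class AffineForm:
    """x0 + sum(x_i * eps_i), with every eps_i ranging over [-1, 1]."""
    def __init__(self, x0, terms=None):
        self.x0 = x0
        self.terms = dict(terms or {})  # noise-symbol index -> coefficient

    @classmethod
    def from_interval(cls, lo, hi, sym):
        # Eq. (4): convert [lo, hi] using one fresh noise symbol `sym`
        return cls((hi + lo) / 2, {sym: (hi - lo) / 2})

    def bounds(self):
        # Eq. (3): total deviation is the sum of absolute coefficients
        r = sum(abs(c) for c in self.terms.values())
        return self.x0 - r, self.x0 + r
```

For instance, the interval [2, 6] becomes 4 + 2ε1, whose bounds recover (2, 6) exactly.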

For multiplication, AATRE and AACHA are typical approximation methods.

The affine form of AATRE is
$\hat{x}\hat{y}=x_0y_0+\sum_{i=1}^{n}(x_0y_i+y_0x_i)\epsilon_i+\left(\sum_{i=1}^{n}|x_i|\right)\left(\sum_{i=1}^{n}|y_i|\right)\epsilon_{n+1}.$
(5)

Suppose M1 = max(n1, n2), where n1 and n2 denote the numbers of noise symbols with nonzero coefficients in $\hat{x}$ and $\hat{y}$, respectively. The computational complexity of AATRE is O(M1).
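Eq. (5) costs one pass over the shared coefficient lists. A sketch (function name ours), assuming xs and ys are aligned on the same noise symbols ε_1..ε_n:

```python
def aatre_mul(x0, xs, y0, ys):
    # AATRE product of two affine forms, Eq. (5).
    z0 = x0 * y0
    # Linear part: x0*y_i + y0*x_i preserves the correlation with each eps_i.
    zs = [x0 * yi + y0 * xi for xi, yi in zip(xs, ys)]
    # Conservative bound on the quadratic term, attached to a new eps_{n+1}.
    z_new = sum(abs(xi) for xi in xs) * sum(abs(yi) for yi in ys)
    return z0, zs, z_new
```

For x̂ = 1 + ε1 + 5ε2 and ŷ = 3 − 6ε1 + ε2 (the example used later in Section 3), this yields 3 − 3ε1 + 16ε2 + 42ε3.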

AACHA provides a better approximation result, but it is more complex. The affine form of AACHA is
$\hat{x}\hat{y}=x_0y_0+\sum_{i=1}^{n}(x_0y_i+y_0x_i)\epsilon_i+\frac{a+b}{2}+\frac{b-a}{2}\epsilon_{n+1},$
(6)

where a and b denote the minimum and maximum of the range of $\left(\sum_{i=1}^{n}x_i\epsilon_i\right)\left(\sum_{i=1}^{n}y_i\epsilon_i\right)$. Suppose M2 = n1 + n2. The complexity of computing both extremal values, a and b, is O(M2 log M2). As M1 ≤ M2, the computational complexity of AATRE is lower than that of AACHA .
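As a sanity check (not the O(M2 log M2) algorithm referenced above), the extrema a and b of the quadratic term can be brute-forced on a grid; this is a rough estimate we use only for illustration, since interior extrema need not lie on grid points:

```python
from itertools import product as cartesian

def quad_range_bruteforce(xs, ys, steps=41):
    # Rough estimate of [a, b], the range of
    # (sum x_i*eps_i)(sum y_i*eps_i) over eps in [-1, 1]^n.
    grid = [-1 + 2 * k / (steps - 1) for k in range(steps)]
    vals = [sum(x * e for x, e in zip(xs, eps)) *
            sum(y * e for y, e in zip(ys, eps))
            for eps in cartesian(grid, repeat=len(xs))]
    return min(vals), max(vals)
```

For n = 1 with x1 = 1, y1 = 2, the term is 2ε², whose true range [0, 2] the grid recovers exactly; note the minimum lies in the interior, which is why vertex enumeration alone would not suffice.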

### 2.4 Extreme value theory

The proposed approximation is based on the extreme value theory of multivariable functions .

According to the extreme value theory of multivariable functions, the Hessian matrix of the function, H, and Jacobian matrix of the function, J, can be used to find the local maxima and the local minima. Hessian matrix of function f(x1,x2, …, x n ) is
$H=\begin{bmatrix}\frac{\partial^2 f}{\partial x_1^2}&\frac{\partial^2 f}{\partial x_1\partial x_2}&\cdots&\frac{\partial^2 f}{\partial x_1\partial x_n}\\ \frac{\partial^2 f}{\partial x_2\partial x_1}&\frac{\partial^2 f}{\partial x_2^2}&\cdots&\frac{\partial^2 f}{\partial x_2\partial x_n}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{\partial^2 f}{\partial x_n\partial x_1}&\frac{\partial^2 f}{\partial x_n\partial x_2}&\cdots&\frac{\partial^2 f}{\partial x_n^2}\end{bmatrix}.$
(7)

Here we use $H_{f^\alpha}$ to represent H at a point $f^\alpha=(x_1^\alpha,x_2^\alpha,\cdots,x_n^\alpha)$ and $J_{f^\alpha}$ to represent J at the point f^α.

A stationary point f^α of f is a point where $J_{f^\alpha}=0$. $H_{f^\alpha}$ is indefinite when it is neither positive semidefinite nor negative semidefinite. If $H_{f^\alpha}$ is positive definite, then f^α is a local minimum point. If $H_{f^\alpha}$ is negative definite, then f^α is a local maximum point. If $H_{f^\alpha}$ is indefinite, then f^α is neither a local maximum nor a local minimum; it is a saddle point. Other cases are not used in this paper.

The principal minor determinants are used to determine whether a matrix is positive or negative definite or semidefinite.

A matrix is positive semidefinite if and only if all of its principal minor determinants are nonnegative real numbers.

A matrix is negative semidefinite if and only if all of its odd-order principal minor determinants are non-positive real numbers and all of its even-order principal minor determinants are nonnegative real numbers.
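The rules above can be mechanized. The sketch below (pure Python; all names are ours) computes principal minors of a small symmetric Hessian and applies the definiteness tests of this subsection to classify a stationary point:

```python
from itertools import combinations

def det(m):
    # Laplace expansion along the first row; adequate for small Hessians.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def classify(h):
    """Return 'local minimum', 'local maximum', 'saddle point', or 'other'."""
    n = len(h)
    # Leading principal minors (Sylvester's criterion for definiteness).
    lead = [det([row[:k] for row in h[:k]]) for k in range(1, n + 1)]
    if all(d > 0 for d in lead):
        return 'local minimum'                     # H positive definite
    if all((d < 0) if k % 2 else (d > 0)
           for k, d in enumerate(lead, start=1)):
        return 'local maximum'                     # H negative definite
    # All principal minors, for the semidefiniteness tests.
    princ = [(k, det([[h[i][j] for j in idx] for i in idx]))
             for k in range(1, n + 1)
             for idx in combinations(range(n), k)]
    psd = all(d >= 0 for _, d in princ)
    nsd = all((d <= 0) if k % 2 else (d >= 0) for k, d in princ)
    return 'other' if (psd or nsd) else 'saddle point'  # indefinite -> saddle
```

For the Hessian computed from the d_f example of Section 4.1, H = [[-12, -29], [-29, 10]], this reports a saddle point, consistent with Figure 2.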

## 3 Derivation of the two parts for multiplication

A generic nonlinear operation $z\leftarrow f(\hat{x},\hat{y})$ proposed in  can be described by (8):
$z=f(x_0+x_1\epsilon_1+\cdots+x_n\epsilon_n,\;y_0+y_1\epsilon_1+\cdots+y_n\epsilon_n)=f^{\ast}(\epsilon_1,\dots,\epsilon_n).$
(8)
Since the operation f is nonlinear, f*(ε1, …, ε_n) cannot be expressed exactly as an affine combination of the noise symbols ε_i. In this case, an approximate affine form of the operation, denoted f_z, must be used to approximate f*(ε1, …, ε_n). The difference introduced by this approximation, d_f = f* − f_z, can be expressed by an equivalent affine form of its estimated range, denoted $\hat{d}$. Hence, the affine form of z can be expressed as
$\hat{z}=f_z+\hat{d}.$
(9)
In (9), f z is a first-degree function of ε i and can be expressed as (10)
$f_z(\epsilon_1,\cdots,\epsilon_n)=z_0+\sum_{i=1}^{n}z_i\epsilon_i.$
(10)
The computational complexity of computing the true range of d_f is very high in practical applications, so the estimated range of d_f is used instead. Suppose dmax and dmin denote the upper and lower bounds of the estimated range of d_f, respectively. According to (4), $\hat{d}$ can be expressed as (11)
$\hat{d}=z'+z_{n+1}\epsilon_{n+1}=\frac{d_{\max}+d_{\min}}{2}+\frac{d_{\max}-d_{\min}}{2}\epsilon_{n+1}.$
(11)
With (10) and (11), the affine form of z can be represented as
$\hat{z}=f_z+\hat{d}=z_0+\sum_{i=1}^{n}z_i\epsilon_i+z'+z_{n+1}\epsilon_{n+1}.$
(12)
For multiplication, z can be expressed as
$z=x_0y_0+x_0\sum_{i=1}^{n}y_i\epsilon_i+y_0\sum_{i=1}^{n}x_i\epsilon_i+\left(\sum_{i=1}^{n}x_i\epsilon_i\right)\left(\sum_{i=1}^{n}y_i\epsilon_i\right).$
(13)

The first three terms of (13) form an affine form, and the last term is quadratic. The affine form of the product can therefore also be represented as (12).

According to the definitions of f_z in (10) and $\hat{d}$ in (11), AATRE and AACHA can also be represented by f_z and $\hat{d}$. For AATRE in (5), f_z and $\hat{d}$ are
$f_z=x_0y_0+\sum_{i=1}^{n}(x_0y_i+y_0x_i)\epsilon_i,$
(14)
$\hat{d}=\left(\sum_{i=1}^{n}|x_i|\right)\left(\sum_{i=1}^{n}|y_i|\right)\epsilon_{n+1}.$
(15)
For AACHA in (6), f_z and $\hat{d}$ are
$f_z=x_0y_0+\sum_{i=1}^{n}(x_0y_i+y_0x_i)\epsilon_i,$
(16)
$\hat{d}=\frac{a+b}{2}+\frac{b-a}{2}\epsilon_{n+1}.$
(17)

In the existing affine approximation methods, AATRE and AACHA, dmax and dmin are estimated in the XY plane. In these methods, the same noise symbol appearing in different variables is treated as independent. Hence, the range of $\hat{d}$ is much larger than that of d_f. The difference between $\hat{d}$ and d_f propagates to $\hat{z}$ and results in uncertainty.

To describe the multiplication accurately, we use the ε_i as input arguments and estimate the range of z in the (n + 1)-dimensional space E^{n+1}, whose coordinates are labeled (ε1, …, ε_n, z). In E^{n+1}, a first-degree polynomial function is an (n + 1)-dimensional hyperplane, and a nonlinear polynomial function is an (n + 1)-dimensional curved surface. The approximate affine form in (10) denotes a hyperplane in E^{n+1}. Each such hyperplane can be viewed as a parallel translation of a tangent hyperplane at some point of the curved surface. Hence, all possible approximate affine forms for z can be regarded as the tangent hyperplanes at all points of the curved surface in E^{n+1}. The translation amount is accounted for in d_f, which is approximated by $\hat{d}$. In E^{n+1}, d_f can be viewed as the function giving the distance between the points of the curved surface and the tangent hyperplane.

Figure 1 shows an example with $\hat{x}=1+\epsilon_1+5\epsilon_2$ and $\hat{y}=3-6\epsilon_1+\epsilon_2$. The space is labeled (ε1, ε2, z). The red mesh surface represents the function $z=\hat{x}\hat{y}=(1+\epsilon_1+5\epsilon_2)(3-6\epsilon_1+\epsilon_2)$. The blue plane represents the tangent plane f_z, z = 3 − 3ε1 + 16ε2, at the point z^α = (0, 0, 3). All the possible approximate affine forms for z are the tangent planes at all such points, and d_f is a function of the distance between z and f_z.

Figure 1 Example of multiplication in (n + 1)-dimensional space E^{n+1}.
Here we use $f_{z^\alpha}$ in (18) to denote the tangent hyperplane at the point $z^\alpha=(\epsilon_1^\alpha,\epsilon_2^\alpha,\dots,\epsilon_n^\alpha)$. Any possible approximate affine form can then be written as $f_{z^\alpha}$, too.
$f_{z^\alpha}=z^\alpha+z'_{\epsilon_1}(\epsilon_1-\epsilon_1^\alpha)+z'_{\epsilon_2}(\epsilon_2-\epsilon_2^\alpha)+\cdots+z'_{\epsilon_n}(\epsilon_n-\epsilon_n^\alpha).$
(18)

In (18), $z'_{\epsilon_i}$ denotes the partial derivative of z with respect to the variable ε_i at the point z^α.

With the estimated range of d f , the maximum absolute error of d f can be expressed as
${e}_{a}=max\left(|{d}_{max}|,|{d}_{min}|\right).$
(19)
To reduce the uncertainty, f_z must be the closest to the result of multiplication. Hence, f_z is the tangent hyperplane whose maximum absolute error is minimum among those of all the possible affine forms $f_{z^\alpha}$, that is,
${e}_{a}\left({f}_{z}\right)=min\left({e}_{a}\left({f}_{{z}^{\alpha }}\right)\right).$
(20)

Geometrically, f_z is the tangent hyperplane whose maximum absolute error is minimized.

f_z is derived from the range of d_f, while $\hat{d}$ is the equivalent affine form of d_f. It is very complex to compute the true range of d_f. With $\hat{d}$ in (11), the uncertainty in AA for nonlinear operations is generated by the difference between the true range of d_f and its estimated range.

It is much tighter and easier to estimate the range of d_f in the space E^{n+1} than in the XY plane. Based on the extreme value theory of multivariable functions, the estimated range of d_f in AASEE is derived.

With more accurate dmax and dmin, f z and $\stackrel{̂}{d}$ can be calculated more precisely, and AASEE can achieve a refined affine approximation result.

In the next section, the estimated range of d_f will be derived first, and the two parts will be derived afterwards.

## 4 AASEE for multiplication

### 4.1 Estimated range of the difference

For multiplication, which is expressed as (13), the value of z at the point z α is
${z}^{\alpha }=\left({x}_{0}+\sum _{i=1}^{n}{x}_{i}{\epsilon }_{i}^{\alpha }\right)\left({y}_{0}+\sum _{i=1}^{n}{y}_{i}{\epsilon }_{i}^{\alpha }\right).$
(21)
The partial derivatives of z with respect to the variable ε i at the point z α are
$z'_{\epsilon_i}=x_i\left(y_0+\sum_{j=1}^{n}y_j\epsilon_j^\alpha\right)+y_i\left(x_0+\sum_{j=1}^{n}x_j\epsilon_j^\alpha\right).$
(22)
Upon substituting z^α and $z'_{\epsilon_i}$ into (18), the tangent hyperplane $f_{z^\alpha}$ can be expressed as
$f_{z^\alpha}=\left(x_0+\sum_{i=1}^{n}x_i\epsilon_i^\alpha\right)\left(y_0+\sum_{i=1}^{n}y_i\epsilon_i^\alpha\right)+\sum_{i=1}^{n}\left[x_i\left(y_0+\sum_{j=1}^{n}y_j\epsilon_j^\alpha\right)+y_i\left(x_0+\sum_{j=1}^{n}x_j\epsilon_j^\alpha\right)\right]\left(\epsilon_i-\epsilon_i^\alpha\right).$
(23)
The difference between the tangent hyperplane ${f}_{{z}^{\alpha }}$ and (n + 1)-dimensional quadratic surface z is
$d_f=z-f_{z^\alpha}=\sum_{i,j=1}^{n}x_iy_j(\epsilon_i-\epsilon_i^\alpha)(\epsilon_j-\epsilon_j^\alpha),\quad\text{where}\;\epsilon_i,\epsilon_j,\epsilon_i^\alpha,\epsilon_j^\alpha\in[-1,1].$
(24)
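Eqs. (21) and (22) are straightforward to evaluate numerically. A sketch (function name ours) that returns z^α and the tangent-hyperplane coefficients:

```python
def tangent_plane(x0, xs, y0, ys, eps_a):
    # Value of z at eps_a, Eq. (21).
    xv = x0 + sum(x * e for x, e in zip(xs, eps_a))
    yv = y0 + sum(y * e for y, e in zip(ys, eps_a))
    z_alpha = xv * yv
    # Partial derivatives of z w.r.t. each eps_i at eps_a, Eq. (22).
    grads = [x * yv + y * xv for x, y in zip(xs, ys)]
    return z_alpha, grads
```

With x̂ = 1 + ε1 + 5ε2 and ŷ = 3 − 6ε1 + ε2 at ε^α = (0, 0), this reproduces the tangent plane z = 3 − 3ε1 + 16ε2 of Figure 1.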
Suppose demax and demin denote the estimated maximum and minimum of the function value on the domain boundary, respectively, and dfimax and dfimin denote the local maxima and minima, respectively. The estimated maximum and minimum of the multivariable function d_f, dmax and dmin, can be expressed as
${d}_{max}=max\left({d}_{\text{emax}},{d}_{\text{fimax}}\right),$
(25)
${d}_{min}=min\left({d}_{\text{emin}},{d}_{\text{fimin}}\right).$
(26)
According to (24), the function value at the domain boundary, d fe , is represented by
$d_{fe}=\sum_{i,j=1}^{n}x_iy_j\left[\epsilon_i\epsilon_j-\epsilon_j\epsilon_i^\alpha-\epsilon_i\epsilon_j^\alpha+\epsilon_i^\alpha\epsilon_j^\alpha\right],\quad\text{where}\;\exists\,\epsilon_i=\pm 1,\;i=1,2,\dots,n.$
(27)
To simplify, we consider the extreme case ε_i = ±1 for all i. In this case, the first term ε_iε_j is always positive when i = j. Hence, the estimated function value at the domain boundary, de, is expressed as
$d_e=\sum_{i=1}^{n}x_iy_i+\sum_{i,j=1}^{n}x_iy_j\epsilon_i^\alpha\epsilon_j^\alpha+\sum_{i,j=1,i\ne j}^{n}x_iy_j\epsilon_i\epsilon_j-\sum_{i,j=1}^{n}x_iy_j\epsilon_j\epsilon_i^\alpha-\sum_{i,j=1}^{n}x_iy_j\epsilon_i\epsilon_j^\alpha,\quad\text{where}\;\forall\,\epsilon_i=\pm 1.$
(28)
Hence, the maximum and minimum of de, demax and demin are derived as
$d_{\text{emax}}=\sum_{i=1}^{n}x_iy_i+\sum_{i,j=1}^{n}x_iy_j\epsilon_i^\alpha\epsilon_j^\alpha+\sum_{i,j=1,i\ne j}^{n}|x_iy_j|+\sum_{i,j=1}^{n}|x_iy_j\epsilon_i^\alpha|+\sum_{i,j=1}^{n}|x_iy_j\epsilon_j^\alpha|,$
(29)
$d_{\text{emin}}=\sum_{i=1}^{n}x_iy_i+\sum_{i,j=1}^{n}x_iy_j\epsilon_i^\alpha\epsilon_j^\alpha-\sum_{i,j=1,i\ne j}^{n}|x_iy_j|-\sum_{i,j=1}^{n}|x_iy_j\epsilon_i^\alpha|-\sum_{i,j=1}^{n}|x_iy_j\epsilon_j^\alpha|.$
(30)
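Eqs. (29) and (30) are a direct O(n²) double loop. A sketch (function name ours): `base` collects the first two sums, which appear in both bounds, and `slack` the absolute-value sums that are added or subtracted:

```python
def boundary_estimates(xs, ys, eps_a):
    # d_emax and d_emin of Eqs. (29)-(30) for a given tangent point eps_a.
    n = len(xs)
    pairs = [(i, j) for i in range(n) for j in range(n)]
    base = sum(xs[i] * ys[i] for i in range(n))
    base += sum(xs[i] * ys[j] * eps_a[i] * eps_a[j] for i, j in pairs)
    slack = sum(abs(xs[i] * ys[j]) for i, j in pairs if i != j)
    slack += sum(abs(xs[i] * ys[j] * eps_a[i]) for i, j in pairs)
    slack += sum(abs(xs[i] * ys[j] * eps_a[j]) for i, j in pairs)
    return base + slack, base - slack
```

For the running example (x̂ = 1 + ε1 + 5ε2, ŷ = 3 − 6ε1 + ε2) at ε^α = (0.1, 0.1), this gives approximately (38.1, −40.7).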
For comparison, dfimax and dfimin in (25) and (26) are the extreme values of d_f at interior stationary points. Expanding (24) at interior points, they can be expressed as
$d_{\text{fimax}}=\sum_{i=1}^{n}x_iy_i\epsilon_i^2+\sum_{i,j=1,i\ne j}^{n}x_iy_j\epsilon_i\epsilon_j-\sum_{i,j=1}^{n}x_iy_j\epsilon_i\epsilon_j^\alpha-\sum_{i,j=1}^{n}x_iy_j\epsilon_j\epsilon_i^\alpha+\sum_{i,j=1}^{n}x_iy_j\epsilon_i^\alpha\epsilon_j^\alpha,$
(31)
$d_{\text{fimin}}=\sum_{i=1}^{n}x_iy_i\epsilon_i^2+\sum_{i,j=1,i\ne j}^{n}x_iy_j\epsilon_i\epsilon_j-\sum_{i,j=1}^{n}x_iy_j\epsilon_i\epsilon_j^\alpha-\sum_{i,j=1}^{n}x_iy_j\epsilon_j\epsilon_i^\alpha+\sum_{i,j=1}^{n}x_iy_j\epsilon_i^\alpha\epsilon_j^\alpha,\quad\text{where}\;\epsilon_i,\epsilon_j\in(-1,1)\;\text{and}\;\epsilon_i^\alpha,\epsilon_j^\alpha\in[-1,1].$
(32)
Continuing the example of Section 3, Figure 2 shows the function d_f = −6(ε1 − 0.1)² − 29(ε1 − 0.1)(ε2 − 0.1) + 5(ε2 − 0.1)² for ε1^α = 0.1 and ε2^α = 0.1. The estimated maximum and minimum of d_f at the domain boundary, demax and demin, are also marked in the figure. Since the values of ε_i in (27) are replaced by ε_i = ±1, demax is larger than the maximum of d_f and demin is smaller than the minimum.

Figure 2 d_f, demax, and demin of the example in Section 3.

The extreme value theory of multivariable functions is used to compare demax, dfimax, demin, and dfimin.

Hessian matrix of function ${d}_{f}=\sum _{i,j=1}^{n}{x}_{i}{y}_{j}\left({\epsilon }_{i}-{\epsilon }_{i}^{\alpha }\right)\left({\epsilon }_{j}-{\epsilon }_{j}^{\alpha }\right)$ is
$H=\begin{bmatrix}\frac{\partial^2 d_f}{\partial\epsilon_1^2}&\frac{\partial^2 d_f}{\partial\epsilon_1\partial\epsilon_2}&\cdots&\frac{\partial^2 d_f}{\partial\epsilon_1\partial\epsilon_n}\\ \frac{\partial^2 d_f}{\partial\epsilon_2\partial\epsilon_1}&\frac{\partial^2 d_f}{\partial\epsilon_2^2}&\cdots&\frac{\partial^2 d_f}{\partial\epsilon_2\partial\epsilon_n}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{\partial^2 d_f}{\partial\epsilon_n\partial\epsilon_1}&\frac{\partial^2 d_f}{\partial\epsilon_n\partial\epsilon_2}&\cdots&\frac{\partial^2 d_f}{\partial\epsilon_n^2}\end{bmatrix}=\begin{bmatrix}2x_1y_1&x_1y_2+x_2y_1&\cdots&x_1y_n+x_ny_1\\ x_1y_2+x_2y_1&2x_2y_2&\cdots&x_2y_n+x_ny_2\\ \vdots&\vdots&\ddots&\vdots\\ x_1y_n+x_ny_1&x_2y_n+x_ny_2&\cdots&2x_ny_n\end{bmatrix}.$
(33)

From (33), we can see that H is independent of ε_i; it is an expression in x_i and y_i only. This means that H is the same at every point in the domain.
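Building H from the coefficient lists is a one-liner per entry. A sketch (function name ours):

```python
def hessian(xs, ys):
    # Hessian of d_f per Eq. (33): 2*x_i*y_i on the diagonal,
    # x_i*y_j + x_j*y_i off-diagonal; constant in eps.
    n = len(xs)
    return [[2 * xs[i] * ys[i] if i == j else xs[i] * ys[j] + xs[j] * ys[i]
             for j in range(n)] for i in range(n)]
```

For the running example, xs = [1, 5] and ys = [-6, 1] give H = [[-12, -29], [-29, 10]], matching the coefficients of the d_f shown in Figure 2.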

To determine if H is positive or negative definite or semidefinite, its principal minor determinants are derived as
$D_0=2x_iy_i,$
(34)
$D_1=\begin{vmatrix}2x_iy_i&x_iy_j+x_jy_i\\ x_iy_j+x_jy_i&2x_jy_j\end{vmatrix}=-\left(x_iy_j-x_jy_i\right)^2,$
(35)
$\begin{array}{ll}{D}_{2}& ={D}_{3}=\cdots ={D}_{n}=0,\\ \text{where}\phantom{\rule{2em}{0ex}}1\le i<j\le n.\end{array}$
(36)
As introduced in Section 2.4, H is a positive semidefinite matrix, iff it satisfies
$\forall {x}_{i}{y}_{i}\ge 0,\phantom{\rule{0.3em}{0ex}}\forall {x}_{i}{y}_{j}={x}_{j}{y}_{i},\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{0.3em}{0ex}}1\le i<j\le n.$
(37)
H is a negative semidefinite matrix, iff it satisfies
$\forall {x}_{i}{y}_{i}\ge 0,\phantom{\rule{0.3em}{0ex}}\exists {x}_{i}{y}_{j}\ne {x}_{j}{y}_{i},\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{0.3em}{0ex}}1\le i<j\le n.$
(38)
If it satisfies neither (37) nor (38), which means it satisfies (39), H is an indefinite matrix as
$\exists {x}_{i}{y}_{i}<0,\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{0.3em}{0ex}}1\le i\le n.$
(39)

According to (37), (38), and (39), we can compare demax, demin, dfimax, and dfimin, which are expressed as (29), (30), (31), and (32), respectively. Based on (25) and (26), dmax and dmin can be identified.

#### Lemma 1.

The estimated maximum of the function d f , d max , equals the estimated maximum of the function value at the domain boundary, and the estimated minimum of d f , d min , equals the estimated minimum of the function value at the domain boundary. This can be expressed as
${d}_{\mathit{\text{max}}}={d}_{\mathit{\text{emax}}}\phantom{\rule{2em}{0ex}}{d}_{\mathit{\text{min}}}={d}_{\mathit{\text{emin}}}.$
(40)

#### Proof.

There are two cases to consider: x i y i  < 0 and x i y i  ≥ 0.

For x i y i  < 0, (39) is satisfied and H is indefinite. The stationary point is a saddle point, such as the point P in Figure 2. Neither dfimax nor dfimin exists in d f , that is,
${d}_{max}={d}_{\text{emax}}\phantom{\rule{2em}{0ex}}{d}_{min}={d}_{\text{emin}}.$
(41)

According to (41), Lemma 1 can be proven in this case.

For x i y i  ≥ 0, H may be positive semidefinite or negative semidefinite. d f may have local minima or local maxima under this condition.

As ε i  ∈ [-1, 1], the following inequalities hold:
$\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|\ge ±\sum _{i,j=1,i\ne j}^{n}{x}_{i}{y}_{j}{\epsilon }_{i}{\epsilon }_{j},$
(42)
$\sum _{i,j=1}^{n}|{x}_{i}{y}_{j}{\epsilon }_{i}^{\alpha }|\ge ±\sum _{i,j=1}^{n}{x}_{i}{y}_{j}{\epsilon }_{i}{\epsilon }_{j}^{\alpha },$
(43)
$\sum _{i,j=1}^{n}|{x}_{i}{y}_{j}{\epsilon }_{j}^{\alpha }|\ge ±\sum _{i,j=1}^{n}{x}_{i}{y}_{j}{\epsilon }_{j}{\epsilon }_{i}^{\alpha }.$
(44)
If a local maximum lies at z α , the difference between demax and dfimax is
${d}_{\text{emax}}-{d}_{\text{fimax}}\ge \sum _{i=1}^{n}{x}_{i}{y}_{i}\left(1-{\epsilon }_{i}^{2}\right).$
(45)
Since x i y i  ≥ 0 in (45), it follows that
${d}_{\text{emax}}\ge {d}_{\text{fimax}}.$
(46)
According to (25) and (46), we can prove that
${d}_{max}={d}_{\text{emax}}.$
(47)
Similarly, if a local minimum lies at z α , the difference between demin and dfimin is
$\begin{array}{ll}{d}_{\text{emin}}-{d}_{\text{fimin}}& \le -\sum _{i=1}^{n}{x}_{i}{y}_{i}\left({\epsilon }_{i}^{2}+{\epsilon }_{i}\left({\epsilon }_{i}^{\alpha }+{\epsilon }_{j}^{\alpha }\right)+1\right)\\ \le -\sum _{i=1}^{n}{x}_{i}{y}_{i}{\left({\epsilon }_{i}+1\right)}^{2}.\end{array}$
(48)
As x i y i  ≥ 0 in (48), the inequality (49) can be proven:
${d}_{\text{emin}}\le {d}_{\text{fimin}}.$
(49)
According to (26) and (49), we can prove that
${d}_{min}={d}_{\text{emin}}.$
(50)

As (47) and (50) are established, Lemma 1 is proven in the case of x i y i  ≥ 0.

Combining these two cases, Lemma 1 is proven.

According to Lemma 1, dmax and dmin at a point z α can be computed as demax and demin in (29) and (30).

### 4.2 Expression of the approximate affine form in AASEE

#### Lemma 2.

When f z represents the tangent hyperplane at the point z0 = (0, 0, …, 0), it satisfies (20).

#### Proof.

According to Lemma 1, (29), and (30), the maximum absolute error of d f is
$\begin{array}{ll}{e}_{\mathrm{a}}& =|\sum _{i=1}^{n}{x}_{i}{y}_{i}|+\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|+\sum _{i,j=1}^{n}|{x}_{i}{y}_{j}{\epsilon }_{i}^{\alpha }|\\ \phantom{\rule{1em}{0ex}}+\sum _{i,j=1}^{n}|{x}_{i}{y}_{j}{\epsilon }_{j}^{\alpha }|+|\sum _{i,j=1}^{n}{x}_{i}{y}_{j}{\epsilon }_{i}^{\alpha }{\epsilon }_{j}^{\alpha }|.\end{array}$
(51)
So the maximum absolute error between the tangent hyperplane ${f}_{{z}^{0}}$ at the point z0 = (0, 0, …, 0) and the (n + 1)-dimensional quadratic surface $\stackrel{̂}{x}ŷ$ is
$\begin{array}{l}{e}_{\mathrm{a}}\left({z}^{0}\right)=|\sum _{i=1}^{n}{x}_{i}{y}_{i}|+\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|.\end{array}$
(52)
Suppose that there is another point z α  ≠ z0, represented by z α  = (ε1, ε2, …, ε n ), where ε i  ∈ [-1, 1] and the ε i cannot all be equal to 0. The maximum absolute error between the tangent hyperplane ${f}_{{z}^{\alpha }}$ at the point z α and the (n + 1)-dimensional quadratic surface $\stackrel{̂}{x}ŷ$ is
$\begin{array}{ll}{e}_{\mathrm{a}}\left({z}^{\alpha }\right)& =|\sum _{i=1}^{n}{x}_{i}{y}_{i}|+\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|+\sum _{i,j=1}^{n}|{x}_{i}{y}_{j}{\epsilon }_{i}^{\alpha }|\\ \phantom{\rule{1em}{0ex}}+\sum _{i,j=1}^{n}|{x}_{i}{y}_{j}{\epsilon }_{j}^{\alpha }|+|\sum _{i,j=1}^{n}{x}_{i}{y}_{j}{\epsilon }_{i}^{\alpha }{\epsilon }_{j}^{\alpha }|.\end{array}$
(53)
ea(z α ) and ea(z0) can be compared by
$\begin{array}{ll}{e}_{\mathrm{a}}\left({z}^{0}\right)-{e}_{\mathrm{a}}\left({z}^{\alpha }\right)=& -\sum _{i,j=1}^{n}|{x}_{i}{y}_{j}{\epsilon }_{i}^{\alpha }|\\ \phantom{\rule{1em}{0ex}}-\sum _{i,j=1}^{n}|{x}_{i}{y}_{j}{\epsilon }_{j}^{\alpha }|-|\sum _{i,j=1}^{n}{x}_{i}{y}_{j}{\epsilon }_{i}^{\alpha }{\epsilon }_{j}^{\alpha }|\le 0.\end{array}$
(54)

Because ea(z0) ≤ ea(z α ), the tangent hyperplane ${f}_{{z}^{0}}$ at the point z0 = (0, 0, …, 0) is the tangent hyperplane whose maximum absolute error is minimized.

This proves that the chosen f z is the tangent hyperplane at the point z0 = (0, 0, …, 0).

According to Lemma 2, f z of AASEE denotes the tangent hyperplane at the point z0 = (0, 0, …, 0) and can be expressed as
${f}_{z}={x}_{0}{y}_{0}+{x}_{0}\sum _{i=1}^{n}{y}_{i}{\epsilon }_{i}+{y}_{0}\sum _{i=1}^{n}{x}_{i}{\epsilon }_{i}.$
(55)

This f z is the same as the f z used in AATRE and AACHA.

### 4.3 Expression of the equivalent affine form in AASEE

According to (55), the d f between the tangent hyperplane ${f}_{{z}^{0}}$ and the quadratic surface is
${d}_{f}=\sum _{i,j=1}^{n}{x}_{i}{y}_{j}{\epsilon }_{i}{\epsilon }_{j}.$
(56)
According to Lemma 1, (29), and (30), the estimated maximum and estimated minimum of d f , dmax and dmin can be expressed as
$\begin{array}{lll}{d}_{max}& =& {d}_{\text{emax}}=\sum _{i=1}^{n}{x}_{i}{y}_{i}+\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|\\ {d}_{min}& =& {d}_{\text{emin}}=\sum _{i=1}^{n}{x}_{i}{y}_{i}-\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|.\end{array}$
(57)
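As a quick sanity check of (57), the following sketch (illustrative, with arbitrary random coefficients; the names are ours) evaluates d f  = (Σ i x i ε i )(Σ j y j ε j ) at every vertex ε ∈ {-1, +1} n , the points used in the estimation, and confirms that each value lies within [d min , d max ]:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n = 3
x, y = rng.normal(size=n), rng.normal(size=n)

# Bounds from (57): sum_i x_i*y_i +/- sum_{i != j} |x_i*y_j|
cross = np.sum(np.abs(np.outer(x, y))) - np.sum(np.abs(x * y))
d_max = float(x @ y) + cross
d_min = float(x @ y) - cross

# d_f at every vertex eps in {-1, +1}^n stays inside [d_min, d_max]
for e in product([-1.0, 1.0], repeat=n):
    e = np.asarray(e)
    val = (x @ e) * (y @ e)
    assert d_min - 1e-12 <= val <= d_max + 1e-12
```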
n = 1 is a special case and dmax and dmin can be optimized as
$\begin{array}{ll}{d}_{max}& =\left\{\begin{array}{l}{x}_{1}{y}_{1},\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}n=1,{x}_{1}{y}_{1}\ge 0\\ 0,\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}n=1,{x}_{1}{y}_{1}\le 0\end{array}\right\\end{array}$
(58)
$\begin{array}{ll}{d}_{min}& =\left\{\begin{array}{l}0,\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}n=1,{x}_{1}{y}_{1}\ge 0\\ {x}_{1}{y}_{1},\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}n=1,{x}_{1}{y}_{1}\le 0.\end{array}\right\\end{array}$
(59)
By combining the two cases, d max and d min are rewritten as
$\begin{array}{l}{d}_{max}=\left\{\begin{array}{l}\sum _{i=1}^{n}{x}_{i}{y}_{i}+\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|,\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}n>1\\ {x}_{1}{y}_{1},\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}n=1,\phantom{\rule{1em}{0ex}}{x}_{1}{y}_{1}\ge 0\\ 0,\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}n=1,\phantom{\rule{1em}{0ex}}{x}_{1}{y}_{1}<0\end{array}\right\\end{array}$
(60)
$\begin{array}{l}{d}_{min}=\left\{\begin{array}{l}\sum _{i=1}^{n}{x}_{i}{y}_{i}-\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|,\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}n>1\\ 0,\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}n=1,\phantom{\rule{1em}{0ex}}{x}_{1}{y}_{1}\ge 0\\ {x}_{1}{y}_{1},\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}n=1,\phantom{\rule{1em}{0ex}}{x}_{1}{y}_{1}<0.\end{array}\right\\end{array}$
(61)
When n > 1, the range of $\stackrel{̂}{d}$ can be expressed as
$\left[\sum _{i=1}^{n}{x}_{i}{y}_{i}-\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|,\sum _{i=1}^{n}{x}_{i}{y}_{i}+\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|\right].$
(62)
According to (11), the affine form of $\stackrel{̂}{d}$ can be expressed as
$\stackrel{̂}{d}=\sum _{i=1}^{n}{x}_{i}{y}_{i}+\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|{\epsilon }_{n+1}.$
(63)
When n = 1, the range of $\stackrel{̂}{d}$ can be expressed as
$\left[\phantom{\rule{0.3em}{0ex}}{x}_{1}{y}_{1},0\right]\phantom{\rule{2em}{0ex}}\text{or}\phantom{\rule{2em}{0ex}}\left[\phantom{\rule{0.3em}{0ex}}0,{x}_{1}{y}_{1}\right].$
(64)
The affine form of $\stackrel{̂}{d}$ can be expressed as
$\stackrel{̂}{d}=\frac{1}{2}{x}_{1}{y}_{1}+\frac{1}{2}|{x}_{1}{y}_{1}|{\epsilon }_{2}.$
(65)

### 4.4 Formulary of AASEE

According to (12), the affine form of AASEE for multiplication is
$\begin{array}{ll}\stackrel{̂}{z}& =\phantom{\rule{0.3em}{0ex}}{f}_{z}+\stackrel{̂}{d}={x}_{0}{y}_{0}+{x}_{0}\sum _{i=1}^{n}{y}_{i}{\epsilon }_{i}+{y}_{0}\sum _{i=1}^{n}{x}_{i}{\epsilon }_{i}\\ \phantom{\rule{1em}{0ex}}+\sum _{i=1}^{n}{x}_{i}{y}_{i}+\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|{\epsilon }_{n+1}\phantom{\rule{1em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}n>1,\end{array}$
(66)
$\begin{array}{ll}\stackrel{̂}{z}& =\phantom{\rule{0.3em}{0ex}}{f}_{z}+\stackrel{̂}{d}={x}_{0}{y}_{0}+\left({x}_{0}{y}_{1}+{y}_{0}{x}_{1}\right){\epsilon }_{1}+\frac{1}{2}{x}_{1}{y}_{1}\\ \phantom{\rule{1em}{0ex}}+\frac{1}{2}|{x}_{1}{y}_{1}|{\epsilon }_{2}\phantom{\rule{2em}{0ex}}\text{for}\phantom{\rule{1em}{0ex}}n=1.\end{array}$
(67)

It is impossible to obtain an exact affine form for multiplication in AA; the result of multiplication must be approximated by an affine form. Using ε i as the input arguments, the uncertainty of multiplication in AASEE is reduced. The proposed f z is the closest to the result of multiplication among all possible approximate affine forms, and the upper and lower bounds of $\stackrel{̂}{d}$ in AASEE are much closer to the true bounds of d f . Hence, the uncertainty in AASEE is smaller than that in AATRE and AACHA. Formed by such f z and $\stackrel{̂}{d}$, AASEE creates a refined affine form of multiplication.
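For illustration, (66) and (67) can be sketched in Python as follows. This is not the authors' code: the dictionary representation of affine forms and all names are ours, and the sketch covers only the multiplication rule itself.

```python
import numpy as np

def aasee_mul(x0, xs, y0, ys, fresh):
    """Multiply affine forms x0 + sum_i xs[i]*eps_i and y0 + sum_i ys[i]*eps_i
    by the AASEE rule; `fresh` is the index of the new noise symbol eps_{n+1}."""
    idx = sorted(set(xs) | set(ys))
    x = np.array([xs.get(i, 0.0) for i in idx])
    y = np.array([ys.get(i, 0.0) for i in idx])
    z0 = x0 * y0
    zs = {i: x0 * ys.get(i, 0.0) + y0 * xs.get(i, 0.0) for i in idx}
    if len(idx) == 1:                                 # n = 1 special case, (67)
        z0 += 0.5 * x[0] * y[0]
        zs[fresh] = 0.5 * abs(x[0] * y[0])
    else:                                             # n > 1 case, (66)
        z0 += float(x @ y)
        # coefficient of eps_{n+1}: sum_{i != j} |x_i * y_j|
        zs[fresh] = float(np.sum(np.abs(np.outer(x, y))) - np.sum(np.abs(x * y)))
    return z0, zs

def bounds(c0, cs):
    """Interval of an affine form: centre +/- sum of |noise coefficients|."""
    r = sum(abs(v) for v in cs.values())
    return c0 - r, c0 + r

# x = [0, 1] -> x_hat = 0.5 + 0.5*eps_1; squaring it exercises the n = 1 rule
z0, zs = aasee_mul(0.5, {1: 0.5}, 0.5, {1: 0.5}, fresh=2)
print(bounds(z0, zs))   # (-0.25, 1.0), tighter than AATRE's (-0.5, 1.0)
```

In this small example the true range of x 2 on [0, 1] is [0, 1]; the AASEE bound (-0.25, 1.0) is tighter than the AATRE bound (-0.5, 1.0) obtained with the extra term |x 1||y 1|ε 2.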

## 5 Comparison of AASEE to AATRE and AACHA

### 5.1 Computational complexity

The computational complexity of an expression is determined by its most complex term. For n > 1, the most complex term is the coefficient of ε n+1 . For convenience of analysis, we transform this coefficient:
$\begin{array}{ll}\sum _{i,j=1,i\ne j}^{n}|{x}_{i}{y}_{j}|& =\sum _{i,j=1}^{n}|{x}_{i}{y}_{j}|-\sum _{i=1}^{n}|{x}_{i}{y}_{i}|\\ & =\sum _{i=1}^{n}|{x}_{i}|\sum _{j=1}^{n}|{y}_{j}|-\sum _{i=1}^{n}|{x}_{i}{y}_{i}|.\end{array}$
(68)

The computational complexity of the minuend is O(M1), where M1 is defined in Section 2.3, while the computational complexity of the subtrahend is less than O(M1).

Hence, the computational complexity of AASEE is O(M1). We can see that it is the same as that of AATRE and is lower than that of AACHA.

### 5.2 Accuracy

The accuracy of $\stackrel{̂}{d}$ influences the accuracy of the affine approximation methods of multiplication: a more accurate $\stackrel{̂}{d}$ leads to a more accurate affine approximation result.

For AATRE, $\stackrel{̂}{d}=\sum _{i=1}^{n}|{x}_{i}|\sum _{i=1}^{n}|{y}_{i}|{\epsilon }_{n+1}$. In this method, identical noise symbols of different variables are treated as independent. The range of this $\stackrel{̂}{d}$ is
$\left[-\sum _{i=1}^{n}|{x}_{i}|\sum _{i=1}^{n}|{y}_{i}|,\sum _{i=1}^{n}|{x}_{i}|\sum _{i=1}^{n}|{y}_{i}|\right].$
(69)

It is much larger than the range of $\stackrel{̂}{d}$ by AASEE, which is expressed in (62) and (64).

In AACHA, $\stackrel{̂}{d}=\frac{a+b}{2}+\frac{b-a}{2}{\epsilon }_{n+1}$, where a and b represent the estimated range of $\stackrel{̂}{d}$. In this method, a polygon in the XY plane is used to find a and b; the domain of $\stackrel{̂}{x}ŷ$ is bounded by the polygon. However, the polygon is larger than the true domain, and identical noise symbols of different variables are not all taken into account together.

Identical noise symbols of different variables are all considered together by the $\stackrel{̂}{d}$ of AASEE. It is therefore more accurate than the $\stackrel{̂}{d}$ of AATRE and, in most cases, more accurate than the $\stackrel{̂}{d}$ of AACHA, too.
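That AASEE's $\stackrel{̂}{d}$ interval (62) is never wider than AATRE's (69) follows from Σ i≠j |x i y j | ≤ Σ i |x i |Σ j |y j |; a quick randomized check (illustrative, our own sketch) confirms this for n > 1:

```python
import numpy as np

# Compare the interval widths of d_hat in AATRE (69) and AASEE (62)
# over random coefficient vectors; AASEE is never wider.
rng = np.random.default_rng(2)
for _ in range(1000):
    n = int(rng.integers(2, 8))
    x, y = rng.normal(size=n), rng.normal(size=n)
    w_aatre = 2.0 * np.sum(np.abs(x)) * np.sum(np.abs(y))
    w_aasee = 2.0 * (np.sum(np.abs(np.outer(x, y))) - np.sum(np.abs(x * y)))
    assert w_aasee <= w_aatre + 1e-12
```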

## 6 Case studies

The following nonlinear system cases are used to demonstrate the efficiency of the proposed refined affine form of multiplication. These cases are commonly used in signal processing. The first two cases are univariate and come from . The remaining cases are multivariate polynomial functions and come from .

### 6.1 Introduction of the cases

Case 1. Polynomial approximation. The first case study is a degree-four polynomial approximation of y = ln(1 + x), where x = [0,1]. The polynomial is evaluated by Horner's rule as
$y=\left(\left(\left(-0.0550x+0.2168\right)x-0.4645\right)x+0.9956\right)x+0.0001,$

where the coefficients are obtained by a polynomial curve-fitting technique.
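The true output range quoted in Section 6.2 can be reproduced by dense sampling (our own sketch, not part of the original method):

```python
import numpy as np

# Evaluate the Horner form of the degree-four fit of ln(1 + x) on [0, 1]
x = np.linspace(0.0, 1.0, 100001)
y = (((-0.0550 * x + 0.2168) * x - 0.4645) * x + 0.9956) * x + 0.0001

assert abs(float(y.min()) - 0.0) < 1e-3        # minimum, at x = 0
assert abs(float(y.max()) - 0.6931) < 1e-3     # maximum, near ln(2)
assert float(np.max(np.abs(y - np.log1p(x)))) < 2e-3   # quality of the fit
```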

Case 2. B-splines. Uniform cubic B-splines are commonly used for image warping . The basis functions B0, B1, B2, and B3 of the B-spline are defined as
$\begin{array}{ll}{B}_{0}\left(u\right)& =\frac{1}{6}{\left(1-u\right)}^{3},\phantom{\rule{1em}{0ex}}{B}_{1}\left(u\right)=\frac{1}{6}\left(3{u}^{3}-6{u}^{2}+4\right),\\ {B}_{2}\left(u\right)& =\frac{1}{6}\left(-3{u}^{3}+3{u}^{2}+3u+1\right),\phantom{\rule{1em}{0ex}}{B}_{3}\left(u\right)=\frac{-{u}^{3}}{6},\end{array}$

where u = [0, 1].

Case 3. Multivariate polynomial functions. In the third case, eight multivariate polynomial functions are examined. They are as follows:
1. 1.
Savitzky-Golay filter:
$\begin{array}{ll}{f}_{1}\left(\mathbit{X}\right)& =7{x}_{1}^{3}-984{x}_{2}^{3}-76{x}_{1}^{2}{x}_{2}+92{x}_{1}{x}_{2}^{2}+7{x}_{1}^{2}\\ \phantom{\rule{1em}{0ex}}-39{x}_{1}{x}_{2}-46{x}_{2}^{2}+7{x}_{1}-46{x}_{2}-75\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathbit{X}=\phantom{\rule{0.3em}{0ex}}{\left[\phantom{\rule{0.3em}{0ex}}-2,2\right]}^{2}\end{array}$

2. 2.
Image rejection unit:
$\begin{array}{ll}{f}_{2}\left(\mathbit{X}\right)& =16384\left({x}_{1}^{4}+{x}_{2}^{4}\right)+64767\left({x}_{1}^{2}-{x}_{2}^{2}\right)+{x}_{1}-{x}_{2}\\ \phantom{\rule{1em}{0ex}}+57344{x}_{1}{x}_{2}\left({x}_{1}-{x}_{2}\right)\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathbit{X}=\phantom{\rule{0.3em}{0ex}}{\left[\phantom{\rule{0.3em}{0ex}}0,1\right]}^{2}\end{array}$

3. 3.
A random function:
$\begin{array}{ll}{f}_{3}\left(\mathbit{X}\right)& =\left({x}_{1}-1\right)\left({x}_{1}+2\right)\left({x}_{2}+1\right)\left({x}_{2}-2\right){x}_{3}^{2}\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathbit{X}=\phantom{\rule{0.3em}{0ex}}{\left[\phantom{\rule{0.3em}{0ex}}-2,2\right]}^{3}\end{array}$

4. 4.
Mitchell function:
$\begin{array}{ll}{f}_{4}\left(\mathbit{X}\right)& =4\left[{x}_{1}^{4}+{\left({x}_{2}^{2}+{x}_{3}^{2}\right)}^{2}\right]+17{x}_{1}^{2}\left({x}_{2}^{2}+{x}_{3}^{2}\right)\\ \phantom{\rule{1em}{0ex}}-20\left({x}_{1}^{2}+{x}_{2}^{2}+{x}_{3}^{2}\right)+17\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathbit{X}=\phantom{\rule{0.3em}{0ex}}{\left[\phantom{\rule{0.3em}{0ex}}-2,2\right]}^{3}\end{array}$

5. 5.
Matyas function:
$\begin{array}{ll}{f}_{5}\left(\mathbit{X}\right)& =0.26\left({x}_{1}^{2}+{x}_{2}^{2}\right)-0.48{x}_{1}{x}_{2}\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathbit{X}=\phantom{\rule{0.3em}{0ex}}{\left[\phantom{\rule{0.3em}{0ex}}-100,100\right]}^{2}\end{array}$

6. 6.
Three-hump function:
$\begin{array}{ll}{f}_{6}\left(\mathbit{X}\right)& =12{x}_{1}^{2}-6.3{x}_{1}^{4}+{x}_{1}^{6}+6{x}_{2}\left({x}_{2}-{x}_{1}\right)\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathbit{X}=\phantom{\rule{0.3em}{0ex}}{\left[\phantom{\rule{0.3em}{0ex}}-10,10\right]}^{2}\end{array}$

7. 7.
Goldstein-Price function:

8. 8.
Ratscheck function:
$\begin{array}{ll}{f}_{8}\left(\mathbit{X}\right)& =4{x}_{1}^{2}-2.1{x}_{1}^{4}+\frac{1}{3}{x}_{1}^{6}+{x}_{1}{x}_{2}-4{x}_{2}^{2}+4{x}_{2}^{4}\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathbit{X}=\phantom{\rule{0.3em}{0ex}}{\left[\phantom{\rule{0.3em}{0ex}}-100,100\right]}^{2}\end{array}$

### 6.2 Analysis of case 1

For the input range x = [0, 1], the equivalent affine form is $\stackrel{̂}{x}=0.5+0.5{\epsilon }_{1}$. For case 1, the intermediate and output signals are defined as
$\begin{array}{ll}\hfill {y}_{1}& =-0.0550x+0.2168,\phantom{\rule{1em}{0ex}}{y}_{2}={y}_{1}x-0.4645,\\ \hfill {y}_{3}& ={y}_{2}x+0.9956,\phantom{\rule{1em}{0ex}}y={y}_{3}x+0.0001.\end{array}$
(70)
Using AATRE, the affine forms of intermediate and output are
$\begin{array}{ll}\hfill {y}_{1}& =0.1893-0.0275{\epsilon }_{1},\\ \hfill {y}_{2}& =-0.36985+0.0809{\epsilon }_{1}+0.01375{\epsilon }_{2},\\ \hfill {y}_{3}& =0.81068-0.14448{\epsilon }_{1}+0.00688{\epsilon }_{2}+0.04733{\epsilon }_{3},\\ \hfill y& =0.4054+0.3331{\epsilon }_{1}\phantom{\rule{0.3em}{0ex}}+0.0034{\epsilon }_{2}+0.0237{\epsilon }_{3}\phantom{\rule{0.3em}{0ex}}+0.0993{\epsilon }_{4}.\end{array}$
Using AACHA, the affine forms of intermediate and output are
$\begin{array}{ll}{y}_{1}& =0.1893-0.0275{\epsilon }_{1},\\ {y}_{2}& =-0.3768+0.0809{\epsilon }_{1}+0.0069{\epsilon }_{2},\\ {y}_{3}& =0.8291-0.1479{\epsilon }_{1}+0.0034{\epsilon }_{2}+0.0220{\epsilon }_{3},\\ \hfill y& =0.3761\phantom{\rule{0.3em}{0ex}}+0.3406{\epsilon }_{1}\phantom{\rule{0.3em}{0ex}}+0.0017{\epsilon }_{2}+0.0110{\epsilon }_{3}+0.0436{\epsilon }_{4}.\end{array}$
Using AASEE, the affine forms of intermediate and output are
$\begin{array}{ll}{y}_{1}& =0.1893-0.0275{\epsilon }_{1},\\ {y}_{2}& =-0.37673+0.0809{\epsilon }_{1}+0.00688{\epsilon }_{2},\\ {y}_{3}& =0.84769-0.14791{\epsilon }_{1}+0.00344{\epsilon }_{2}+0.00344{\epsilon }_{3},\\ \hfill y& =0.34999+0.34989{\epsilon }_{1}\phantom{\rule{0.3em}{0ex}}+0.00172\left({\epsilon }_{2}\phantom{\rule{0.3em}{0ex}}+\phantom{\rule{0.3em}{0ex}}{\epsilon }_{3}\right)+0.00344{\epsilon }_{4}.\end{array}$
Table 1 shows the variable ranges and the range intervals, (ymax - ymin), of the intermediates and output by the three methods. The true range of y is [0, 0.6931], so the true range interval of the output is 0.6931. Let R(T), R(C), and R(A) denote the ratios of the range intervals obtained by AATRE, AACHA, and AASEE, respectively, to the true range interval. The closer this ratio is to 1, the more accurate the method. In this case, R(T) = 1.33, R(C) = 1.15, and R(A) = 1.03, so the range by AASEE is closer to the true range than those by AATRE and AACHA.
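These R values can be recomputed from the affine forms listed above, since the range interval of an affine form is twice the sum of the absolute values of its noise-symbol coefficients (a sketch using the coefficients quoted in this section):

```python
# Range interval of an affine form = 2 * sum of |noise-symbol coefficients|
true_interval = 0.6931
r_t = 2 * (0.3331 + 0.0034 + 0.0237 + 0.0993) / true_interval      # AATRE
r_c = 2 * (0.3406 + 0.0017 + 0.0110 + 0.0436) / true_interval      # AACHA
r_a = 2 * (0.34989 + 0.00172 + 0.00172 + 0.00344) / true_interval  # AASEE
print(round(r_t, 2), round(r_c, 2), round(r_a, 2))   # 1.33 1.15 1.03
```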

### 6.3 Comparison of range and computational complexity by the three cases

The output ranges of case 2 and case 3 by the three methods can be obtained following the same process as case 1.

Table 2 shows the ranges and the integer word lengths derived by AASEE, together with a comparison among AATRE, AACHA, and AASEE. Column c.fun indicates the case study and function of each row. The true output ranges, used as reference values, are obtained by a numerical method or a nonlinear programming technique; these are time-consuming and impractical for solving the true bounds of a large number of signals. From the table, we can see that the ranges derived by AASEE cover the true ranges and are smaller than those by AATRE for all the functions. For these thirteen functions, the ranges derived by AASEE are smaller than those by AACHA for nine functions and equal to those by AACHA for two functions. According to (1), the integer word length can be decided by the range. The integer word length derived by AASEE is at most 2 b less than that by AATRE and 1 b less than that by AACHA. Compared with AATRE, AASEE and AACHA can save 0.54 b on average.

To calculate the estimated range of d f , the values of ε i , i = 1, 2, …, n in (27) are substituted by ε i  = ±1 in AASEE. The difference between the estimated range and the true range of d f is introduced by this approximation. In most of the applications, the estimated ranges computed by AASEE are closer to the true ranges than those by AACHA. However, the estimated minimum and maximum of $\stackrel{̂}{x}ŷ$ on the boundary of the polygon are independent of the value of ε i . In some applications, such as functions f2 and f8 in Table 2, the results by AASEE are almost the same as those by AACHA.

In Table 3, the ratios of range intervals and the computational complexity are compared among AATRE, AACHA, and AASEE. The computational complexity is calculated from the numbers of multiplications and additions. For AACHA, the extreme value of a quadratic function in one variable on a bounded interval also needs to be calculated. Nm, Na, and Ne denote the numbers of multiplications, additions, and extreme value computations of each case, respectively. Table 3 shows that the R(T) values range from 1.04 to 281.2, R(C) from 1.03 to 233.7, and R(A) from 1.03 to 192.9. The ratios of R(A) to R(T) and to R(C) show the accuracy of AASEE compared to AATRE and AACHA, respectively, and their averages can be used to evaluate the accuracy of the affine approximation methods. The ratios of R(A) to R(T) range from 0.18 to 0.99, with an average of 0.59; the ratios of R(A) to R(C) range from 0.33 to 1.17, with an average of 0.89. For these 13 cases, on average, the accuracy of AASEE is 1.69 times that of AATRE and 1.12 times that of AACHA. The extreme value computation of the quadratic function, which is necessary only for AACHA, is the most complex and time-consuming of these operations; hence, the computational complexity of AACHA is much higher than that of AATRE and AASEE. The increase rate of the number of multiplications, Nm, of AASEE over AATRE is from 0.091 to 1.75, with an average of 0.450; over AACHA, it is from 0.2 to 1.833, with an average of 0.567. The increase rate of the number of additions, Na, of AASEE over AATRE is from 0.05 to 3.4, with an average of 0.944; over AACHA, it is from 0 to 0.985, with an average of 0.157. The numbers of multiplications and additions of AASEE thus increase only slightly.
As shown in Table 3, AACHA is slightly more accurate for functions c3.f2 and c3.f8, but the computational complexity of AACHA is much higher than that of AASEE.

### 6.4 Comparison of the design cost by the three methods

To compare the design cost, i.e., the system area, of the three methods, the fractional word lengths are obtained by the precision analysis in . We select the random function of case 3, c3.f3, for this section. The design of c3.f3 is synthesized on a Xilinx Xc2vp30-7ff896 FPGA device (Xilinx, San Jose, CA, USA).

Figure 3 shows the area variation for c3.f3 with increasing target precision. It can be seen that the area obtained by AASEE is less than that by AATRE and AACHA, and the area difference between them grows with the target precision, from 265 to 729. Such optimization of the integer word length can save area. Figure 3 Area variation for c3.f3 with increasing target precision.
Figure 4 shows the percentage area saving of AASEE over AATRE at different target precisions for c3.f3. The percentage area saving decreases from 14.34% to 5.62% as the target precision increases; generally, the relative saving is larger at lower precision. Figure 4 Percentage area saving of AASEE over AATRE at different target precisions for c3.f3.

## 7 Conclusions

This paper presents a novel affine approximation method for multiplication, Approximation Affine based on Space Extreme Estimation (AASEE). In this method, an extra noise symbol is added to the approximate affine form.

To reduce the uncertainty in AA, we derive this method in the (n + 1)-dimensional space E n + 1 . In this space, the approximate affine form can be regarded as the tangent hyperplane at a certain point of the (n + 1)-dimensional curved surface. Using linear geometry, it is proven that the f z of AASEE is the closest to the result of multiplication among all possible approximate affine forms. Taking ε i as the input arguments, all identical noise symbols of different variables are taken into account together; hence, the uncertainty of the $\stackrel{̂}{d}$ of AASEE is reduced. Based on the extreme value theory of multivariable functions, we prove that the range of this $\stackrel{̂}{d}$ covers the true range of the difference introduced by the approximation and is much tighter than those by AATRE and AACHA.

The uncertainty in AASEE is much smaller than that in AATRE and AACHA on average. At the same time, the computational complexity of AASEE is the same as that of AATRE and lower than that of AACHA.

In the case studies, the accuracy of AASEE is on average 1.69 times that of AATRE and 1.12 times that of AACHA. The integer word length derived by AASEE is at most 2 b less than that by AATRE and 1 b less than that by AACHA. For the case of c3.f3, the area computed by AASEE is less than that by AATRE and AACHA, and the percentage area saving of AASEE over AATRE decreases from 14.34% to 5.62% as the target precision increases.

## Authors’ Affiliations

(1)
Key Laboratory of Network Oriented Intelligent Computation, Harbin Institute of Technology Shenzhen Graduate School, Xili, Shenzhen, 518055, China
