- Research
- Open Access
A refined affine approximation method of multiplication for range analysis in word-length optimization
- Ruiyi Sun^{1},
- Yan Zhang^{1} and
- Aijiao Cui^{1}
https://doi.org/10.1186/1687-6180-2014-36
© Sun et al.; licensee Springer. 2014
- Received: 8 June 2013
- Accepted: 10 March 2014
- Published: 22 March 2014
Abstract
Affine arithmetic (AA) is widely used for range analysis in word-length optimization of hardware designs. To reduce the uncertainty in AA and achieve efficient and accurate range analysis of multiplication, this paper presents a novel refined affine approximation method, Approximation Affine based on Space Extreme Estimation (AASEE). The affine form of multiplication is divided into two parts. The first part is the approximate affine form of the operation. In the second part, the equivalent affine form of the estimated range of the difference introduced by the approximation is represented by an extra noise symbol. In AASEE, it is proven, based on linear geometry, that the proposed approximate affine form is the closest to the result of multiplication. The proposed equivalent affine form of AASEE is more accurate since the extreme value theory of multivariable functions is used to minimize the difference between the result of multiplication and the approximate affine form. The computational complexity of AASEE is the same as that of trivial range estimation (AATRE) and lower than that of Chebyshev approximation (AACHA). The proposed affine form of multiplication is demonstrated with polynomial approximation, B-splines, and multivariate polynomial functions. In the experiments, the average of the ranges derived by AASEE is 59% and 89% of those by AATRE and AACHA, respectively. The integer bit widths derived by AASEE are at most 2 and 1 bits less than those by AATRE and AACHA, respectively.
Keywords
- Word-length optimization
- Range analysis
- Affine approximation
- Accuracy
- Uncertainty analysis
1 Introduction
As a method of representing real numbers, floating point supports a wide dynamic range and high precision. It has thus been commonly used to represent signals in signal processing applications such as image processing, speech processing, and digital signal processing. When these applications are implemented in hardware for high speed and stability, the signals need to be represented in fixed point to optimize the area, power, and speed of the hardware. Hence, floating-point values need to be converted to fixed-point ones. This process is called word-length optimization. Its goal is to achieve optimal system performance while satisfying the specification on the system output precision. Word-length optimization involves range analysis and precision analysis. The former finds the minimum word length of the integer part of a value, while the latter focuses on optimizing the fractional part of the word length.
Word-length optimization has been proven to be an NP-hard problem [1]. Its methods can usually be classified into dynamic analysis [2–7] and static analysis [8–20]. By analyzing a large set of stimulus signals, dynamic analysis is applicable to all types of systems. However, it takes a long simulation time to provide sufficient confidence, and the precision of signals not covered by the simulation cannot be guaranteed. Comparatively, static analysis is an automated and efficient word-length optimization method and is more applicable to large designs. Static analysis mainly uses the characteristics of the input signals to estimate the word length conservatively, which can result in overestimation [12] to some extent. As a part of word-length optimization, range analysis can be classified in the same way.
Affine arithmetic (AA) [21] is often used for range analysis in static analysis. In AA, every signal is represented in an affine form, which is a first-degree polynomial. As AA tracks the correlations among the range intervals of signals, it can provide more accurate ranges, which makes it suitable for range analysis of linear operations. Besides linear operations, however, nonlinear operations such as multiplication are also involved in hardware designs, typically in linear time-invariant (LTI) systems. AA cannot provide an exact affine form for nonlinear operations. To solve this problem, Stolfi and de Figueiredo [22] proposed affine approximation methods for multiplication, including trivial range estimation (AATRE) and Chebyshev approximation (AACHA). AATRE is computationally efficient, but the range it produces can be up to four times the real range. The accumulation of the uncertainty of all signals along the computational chain may result in an error explosion, which is unacceptable in practice. Such overestimation cannot satisfy the accuracy requirement of the system, which limits the application of AATRE in large systems. The uncertainty of AACHA is less than that of AATRE; however, AACHA is too complex to be used in large systems. Since LTI operations are covered exactly by AA, the proposed method is applied to the range analysis of word-length optimization in this paper.
A novel affine approximation method, Approximation Affine based on Space Extreme Estimation (AASEE), is proposed in this paper to reduce the uncertainty of multiplication and achieve accurate and efficient range analysis of multiplication. To analyze the uncertainty conveniently, we divide each approximation method for multiplication, including AATRE, AACHA, and AASEE, into two parts. The first part, named the approximate affine form, approximates the nonlinear operation. The second part, named the equivalent affine form, is the affine representation of the estimated range of the difference between the result of multiplication and the approximate affine form. The more accurate the two parts are, the more accurate the approximation method is. Based on linear geometry [23], it is proven that the proposed approximate affine form is the closest to the result of multiplication. To derive the equivalent affine form, we use the extreme value theory of multivariable functions [24] to estimate the upper and lower bounds, in space, of the difference introduced by the approximation of the first part. The uncertainty of the proposed method is thereby minimized. The accuracy of the affine form produced by AASEE is higher than that by AATRE and higher on average than that by AACHA. Meanwhile, the computational complexity of AASEE is equivalent to that of AATRE and lower than that of AACHA.
The rest of this paper is organized as follows. Background on range analysis for multiplication is presented in Section 2. Section 3 presents the derivation of the two parts for multiplication. The refined affine form of multiplication, AASEE, is presented in the next section. In Section 5, we compare the computational complexity and the accuracy of AASEE with those of AATRE and AACHA. Case studies and experimental results are presented in Section 6. Section 7 concludes the paper.
2 Background
2.1 Related work
Interval arithmetic (IA) and affine arithmetic (AA) have been widely used in range analysis in word-length optimization.
IA [25] is a range arithmetic theory first presented by Moore in 1962. Cmar [2] employs it for range analysis of digital signal processing (DSP) systems. Carreras [20] presents a method based on IA; to reduce oversized word lengths, the method provides probability density functions that can be used when some truncation must be performed due to constraints in the specification. IA is not suitable for most real-world applications, since it can lead to drastic overestimation of the true range.
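The overestimation comes from IA treating every operand as an independent range, so even perfectly correlated expressions inflate. A minimal illustrative sketch (not the cited implementations):

```python
# Minimal interval-arithmetic sketch: IA treats both operands of a
# subtraction as independent ranges, so the correlation in x - x is lost.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]: worst case over both ranges
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.0, 1.0)
print(x - x)   # IA yields [-1.0, 1.0], although x - x is exactly 0
```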
AA [21] was proposed by Stolfi in 1993 to overcome the weakness of IA. In [8, 9], Fang uses AA for word-length optimization; both range and precision are represented by the same affine form, which limits the optimization. Pu and Ha [10] also use AA for word-length optimization; they use two different affine forms for range analysis and precision analysis, respectively, and achieve a more refined word-length optimization result. Similarly, Lee et al. [11] develop an automatic optimization approach, called MiniBit, to produce accuracy-guaranteed solutions in which area is minimized while meeting an error constraint. Osborne [12] uses both IA and AA for range analysis in different situations; computation using either of the two methods in the design is time-consuming, and the problem of overestimation is serious due to the approximation of the nonlinear operations.
Since AA cannot be used in systems with an infinite number of loop iterations, an improved approach, quantized AA (QAA), has been proposed in [13] for linear time-invariant systems with feedback loops. This method provides fast and tight estimates of the evolution of large sets of numerical inputs using only an affine-based simulation, but it does not provide exact bounds.
AATRE [22] is adopted for multiplication in most works because of its low computational complexity, but the uncertainty of the range it produces is very large. To adjust the trade-off between approximation accuracy and computational complexity, Zhang [14] introduces a new parameter N in the N-level simplified affine approximation (N-SAA). This method is faster than AACHA and more accurate than AATRE, but it is more complex than AATRE, and it is troublesome to choose a suitable N. A range analysis method proposed by Pang [26] combines IA, AATRE, and the arithmetic transform (AT); its result is more accurate than AATRE's, but its CPU time is longer. To deal with applications from the scientific computing domain, Kinsman [17, 18] uses computational methods based on Satisfiability Modulo Theories; the improved search efficiency of this method leads to tighter bounds and thus smaller word lengths.
For all the existing methods, the accuracy of approximation is improved at the expense of computational complexity. This paper presents an affine approximation method for multiplication that achieves a better trade-off between accuracy and computational complexity.
2.2 Range analysis
In (1), all the signals in the design are assumed to be expressed as signed numbers, and the sign bit is taken into account in IWL_{ x }. According to (1), once the range of a signal is determined, the integer part of its word length can be derived.
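Equation (1) itself is not reproduced in this excerpt. One common convention for signed two's-complement signals, consistent with the bit counts reported in Table 2, is IWL_{ x } = ⌊log_{2} max(|x̲|, |x̄|)⌋ + 2, with the sign bit included and a floor of 1 bit. A minimal sketch under that assumption:

```python
import math

def iwl(lo, hi):
    """Integer word length for a signed signal with range [lo, hi].

    Illustrative convention only (sign bit included, minimum 1 bit);
    the paper's equation (1) is not reproduced here.
    """
    m = max(abs(lo), abs(hi))
    if m == 0:
        return 1
    return max(1, math.floor(math.log2(m)) + 2)

print(iwl(-36, 64))      # 8, matching the true-range row of c3.f3 in Table 2
print(iwl(-9453, 9303))  # 15, matching the true-range row of c3.f1
```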
2.3 Affine arithmetic
For the signal x, x_{0} is the central value, and ε_{ i } is the i th noise symbol. ε_{ i } denotes an independent uncertainty source that contributes to the total uncertainty of the signal x, and x_{ i } is its coefficient.
AA keeps the correlations among the signals of a computational chain by sharing the same noise symbol ε_{ i } among them [22].
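This correlation-tracking property can be sketched in a few lines (an illustrative Python model of the affine form x_{0} + Σ x_{ i }ε_{ i }, not the paper's implementation):

```python
# Affine form x0 + Σ x_i ε_i modeled as {noise-symbol index: coefficient}.
# Linear operations are exact in AA; sharing noise symbols preserves
# correlation, so x - x collapses to exactly 0 (unlike IA).

class Affine:
    def __init__(self, x0, terms=None):
        self.x0 = x0
        self.terms = dict(terms or {})   # {i: x_i}, each ε_i ∈ [-1, 1]

    def __sub__(self, other):
        t = dict(self.terms)
        for i, c in other.terms.items():
            t[i] = t.get(i, 0.0) - c
        return Affine(self.x0 - other.x0,
                      {i: c for i, c in t.items() if c})

    def range(self):
        r = sum(abs(c) for c in self.terms.values())
        return (self.x0 - r, self.x0 + r)

x = Affine(0.5, {1: 0.5})     # x = 0.5 + 0.5 ε1, i.e., x ∈ [0, 1]
print((x - x).range())         # (0.0, 0.0): the correlation is kept
```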
For multiplication, AATRE and AACHA are typical approximation methods.
Suppose M_{1} = max(n_{1}, n_{2}), where n_{1} and n_{2} denote the numbers of noise symbols with nonzero coefficients in $\widehat{x}$ and $\widehat{y}$, respectively. The computational complexity of AATRE is O(M_{1}).
where a and b denote the minimum and the maximum of the range of $\left(\sum _{i=1}^{n}{x}_{i}{\epsilon}_{i}\right)\left(\sum _{i=1}^{n}{y}_{i}{\epsilon}_{i}\right)$. Suppose M_{2} = n_{1} + n_{2}. The complexity of computing both extremal values, a and b, is O(M_{2} log M_{2}). As M_{1} ≤ M_{2}, the computational complexity of AATRE is lower than that of AACHA [22].
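For reference, AATRE's construction [22] bounds the quadratic residue by u·v with u = Σ|x_{ i }| and v = Σ|y_{ i }| and absorbs it into one fresh noise symbol. A small sketch (illustrative only; `aatre_mul` and `new_sym` are names introduced here):

```python
# AATRE multiplication sketch (after Stolfi and de Figueiredo [22]):
# z = x0*y0 + Σ (x0*y_i + y0*x_i) ε_i + u*v ε_new, with u = Σ|x_i|, v = Σ|y_i|.

def aatre_mul(x0, xs, y0, ys, new_sym):
    """xs, ys: {i: coefficient}; returns (z0, zs) with one extra symbol."""
    zs = {}
    for i in set(xs) | set(ys):
        zs[i] = x0 * ys.get(i, 0.0) + y0 * xs.get(i, 0.0)  # exact linear part
    u = sum(abs(c) for c in xs.values())
    v = sum(abs(c) for c in ys.values())
    zs[new_sym] = u * v        # conservative bound on the quadratic residue
    return x0 * y0, zs

# x = y = 0.5 + 0.5 ε1, so x ∈ [0, 1] and the true range of x·y is [0, 1]
z0, zs = aatre_mul(0.5, {1: 0.5}, 0.5, {1: 0.5}, new_sym=2)
rad = sum(abs(c) for c in zs.values())
print((z0 - rad, z0 + rad))    # (-0.5, 1.0): wider than the true range
```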
2.4 Extreme value theory
The proposed approximation is based on the extreme value theory of multivariable functions [24].
Here we use ${\mathit{H}}_{{f}^{\alpha}}$ to represent H at a point ${f}^{\alpha}=({x}_{1}^{\alpha},{x}_{2}^{\alpha},\cdots \phantom{\rule{0.3em}{0ex}},{x}_{n}^{\alpha})$ and ${\mathit{J}}_{{f}^{\alpha}}$ to represent J at a point f^{ α }.
A stationary point f^{ α } of f is a point where ${\mathit{J}}_{{f}^{\alpha}}=0$. ${\mathit{H}}_{{f}^{\alpha}}$ is indefinite when it is neither positive semidefinite nor negative semidefinite. If ${\mathit{H}}_{{f}^{\alpha}}$ is positive definite, then f^{ α } is a local minimum point. If ${\mathit{H}}_{{f}^{\alpha}}$ is negative definite, then f^{ α } is a local maximum point. If ${\mathit{H}}_{{f}^{\alpha}}$ is indefinite, then f^{ α } is neither a local maximum nor a local minimum; it is a saddle point. The remaining cases are not used in this paper.
The principal minor determinants are used to determine if a matrix is positive or negative definite or semidefinite.
A matrix is positive semidefinite if and only if all of its principal minor determinants are nonnegative real numbers.
A matrix is negative semidefinite if and only if all of its odd-order principal minor determinants are non-positive real numbers and all of its even-order principal minor determinants are nonnegative real numbers.
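These two criteria can be checked mechanically by enumerating all principal submatrices. A sketch (illustrative NumPy code; the helper names and the example Hessian are introduced here, not taken from the paper):

```python
import itertools
import numpy as np

def all_principal_minors(A):
    """Yield (order k, determinant) for every principal submatrix of A."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for idx in itertools.combinations(range(n), k):
            yield k, np.linalg.det(A[np.ix_(idx, idx)])

def is_positive_semidefinite(A, tol=1e-12):
    # all principal minors nonnegative
    return all(d >= -tol for _, d in all_principal_minors(A))

def is_negative_semidefinite(A, tol=1e-12):
    # odd-order minors non-positive, even-order minors nonnegative
    return all((d <= tol) if k % 2 else (d >= -tol)
               for k, d in all_principal_minors(A))

# Example: the Hessian of the bilinear term ε1·ε2 is indefinite (a saddle)
H = np.array([[0.0, 1.0], [1.0, 0.0]])
print(is_positive_semidefinite(H), is_negative_semidefinite(H))  # False False
```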
3 Derivation of the two parts for multiplication
The first three terms of (13) form an affine form, and the last term is quadratic. Its affine form can also be represented as (12).
In the existing affine approximation methods of AATRE and AACHA, d_{max} and d_{min} are estimated in the XY plane. In these methods, the same noise symbol of different variables is considered to be independent. Hence, the range of $\widehat{d}$ is much larger than that of d_{ f }. The difference between $\widehat{d}$ and d_{ f } will propagate to $\widehat{z}$ and result in uncertainty.
To describe the multiplication accurately, we use the ε_{ i } as the input arguments and estimate the range of z in the (n + 1)-dimensional space E^{ n+1 }, whose coordinates are labeled (ε_{1}, …, ε_{ n }, z). In E^{ n+1 }, a first-degree polynomial function is an (n + 1)-dimensional hyperplane, and a nonlinear polynomial function is an (n + 1)-dimensional curved surface. The approximate affine form in (10) denotes an (n + 1)-dimensional hyperplane in E^{ n+1 }. Each hyperplane in E^{ n+1 } can be viewed as a parallel translation of the tangent hyperplane at some point of the (n + 1)-dimensional curved surface. Hence, all possible approximate affine forms for z can be regarded as the tangent hyperplanes at all points of the (n + 1)-dimensional curved surface in E^{ n+1 }. The translation amount is taken into account in d_{ f }, which is approximated by $\widehat{d}$. In E^{ n+1 }, d_{ f } can be viewed as the function giving the distance between the points of the curved surface and the tangent hyperplane.
In (18), ${z}^{\prime}_{{\epsilon}_{i}}$ are the partial derivatives of z with respect to the variables ε_{ i } at the point z^{ α }.
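Equation (18) is not reproduced in this excerpt; assuming it follows the standard multivariable tangent-plane definition, it corresponds to:

```latex
f_{z}(\epsilon_1,\dots,\epsilon_n)
  \;=\; z\big|_{z^{\alpha}}
  \;+\; \sum_{i=1}^{n} z'_{\epsilon_i}\big|_{z^{\alpha}}
        \left(\epsilon_i-\epsilon_i^{\alpha}\right)
```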
Geometrically, f_{ z } is the tangent hyperplane whose maximum absolute error is minimized.
f_{ z } is derived from the range of d_{ f }, while $\widehat{d}$ is the equivalent affine form of d_{ f }. It is very complex to compute the true range of d_{ f }. With $\widehat{d}$ in (11), the uncertainty in AA for nonlinear operations is generated by the difference between the true range of d_{ f } and its estimated range.
It is much tighter and easier to estimate the range of d_{ f } in the space E^{ n+1 } than in the XY plane. Based on the extreme value theory of multivariable functions, the estimated range of d_{ f } in AASEE is derived.
With more accurate d_{max} and d_{min}, f_{ z } and $\widehat{d}$ can be calculated more precisely, and AASEE can achieve a refined affine approximation result.
In the next sections, the estimated range of d_{ f } will be derived first, and the two parts will be derived afterwards.
4 AASEE for multiplication
4.1 Estimated range of the difference
The extreme value theory of multivariable functions is used to compare d_{emax}, d_{fimax}, d_{emin}, and d_{fimin}.
From (33), we can see that H is independent of ε_{ i }; it is an expression in x_{ i } and y_{ i } only. This means that H is the same for all points in the domain.
According to (37), (38), and (39), we can compare d_{emax}, d_{emin}, d_{fimax}, and d_{fimin}, which are expressed as (29), (30), (31), and (32), respectively. Based on (25) and (26), d_{max} and d_{min} can be identified.
Lemma 1.
Proof.
There are two cases to consider: ∃x_{ i }y_{ i } < 0 and ∀x_{ i }y_{ i } ≥ 0.
According to (41), Lemma 1 can be proven in this case.
For ∀x_{ i }y_{ i } ≥ 0, H may be positive semidefinite or negative semidefinite. d_{ f } may have local minima or local maxima under this condition.
As (47) and (50) are established, Lemma 1 is proven in the case of ∀x_{ i }y_{ i } ≥ 0.
Combining these two cases, Lemma 1 is proven.
According to Lemma 1, d_{max} and d_{min} at a point z^{ α } can be computed as d_{emax} and d_{emin} in (29) and (30).
4.2 Expression of the approximate affine form in AASEE
Lemma 2.
When f_{ z } represents a tangent hyperplane at the point z^{0} = z_{0} = (0, 0, …, 0), it satisfies (20).
Proof.
Because e_{a}(z^{0}) ≤ e_{a}(z^{ α }), the tangent hyperplane ${f}_{{z}^{0}}$ at the point z^{0} = z_{0} = (0, 0, …, 0) is the tangent hyperplane whose maximum absolute error is minimized.
It is proven that the chosen f_{ z } is a tangent hyperplane at the point z^{0} = z_{0} = (0, 0, …, 0).
This f_{ z } is the same as the f_{ z } used in AATRE and AACHA.
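For concreteness, assuming the usual affine forms $\widehat{x}=x_0+\sum_i x_i\epsilon_i$ and $\widehat{y}=y_0+\sum_i y_i\epsilon_i$ (a standard expansion, not copied from the elided equations), the product and its tangent hyperplane at the origin are:

```latex
\widehat{x}\,\widehat{y}
  = x_0 y_0 + \sum_{i=1}^{n}(x_0 y_i + y_0 x_i)\,\epsilon_i
  + \Big(\sum_{i=1}^{n} x_i\epsilon_i\Big)\Big(\sum_{i=1}^{n} y_i\epsilon_i\Big),
\qquad
f_{z^0} = x_0 y_0 + \sum_{i=1}^{n}(x_0 y_i + y_0 x_i)\,\epsilon_i .
```

Since the quadratic residue and its gradient both vanish at ε = 0, the tangent hyperplane there is exactly the linear part, which is also the linear part used by AATRE and AACHA.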
4.3 Expression of the equivalent affine form in AASEE
4.4 Formulary of AASEE
It is impossible to obtain an exact affine form for multiplication in AA; the result of multiplication must be approximated by an affine form. Using the ε_{ i } as input arguments, the uncertainty of multiplication in AASEE is reduced. The proposed f_{ z } is the closest to the result of multiplication among all possible approximate affine forms, and the upper and lower bounds of $\widehat{d}$ in AASEE are much closer to the true bounds of d_{ f }. Hence, the uncertainty in AASEE is smaller than that in AATRE and AACHA. Formed by such f_{ z } and $\widehat{d}$, AASEE creates a refined affine form of multiplication.
5 Comparison of AASEE to AATRE and AACHA
5.1 Computational complexity
The computational complexity of the minuend is O(M_{1}), where M_{1} is defined in Section 2.3, while the computational complexity of the subtrahend is less than O(M_{1}).
Hence, the computational complexity of AASEE is O(M_{1}). We can see that it is the same as that of AATRE and is lower than that of AACHA.
5.2 Accuracy
The accuracy of $\widehat{d}$ influences the accuracy of the affine approximation methods of multiplication: a more accurate $\widehat{d}$ leads to a more accurate affine approximation result.
It is much larger than the range of $\widehat{d}$ by AASEE, which is expressed in (62) and (64).
In AACHA, $\widehat{d}=\frac{a+b}{2}+\frac{b-a}{2}{\epsilon}_{n+1}$, where a and b represent the estimated range of $\widehat{d}$. In this method, a polygon in the XY plane is used to find a and b, and the domain of $\widehat{x}\widehat{y}$ is bounded by the polygon. However, the polygon is larger than the true domain, and identical noise symbols of different variables are not taken into account together.
The $\widehat{d}$ of AASEE considers all identical noise symbols of different variables together. It is more accurate than the $\widehat{d}$ of AATRE and, in most cases, more accurate than that of AACHA, too.
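The effect of keeping the noise symbols shared can be illustrated with a brute-force grid check over small n (illustrative only; this is not the paper's O(M_{1}) procedure, and the closed forms (29) to (32) are not reproduced here):

```python
# With the same ε_i shared between x̂ and ŷ, the residue
# d_f = (Σ x_i ε_i)(Σ y_i ε_i) often has a far tighter true range than the
# AATRE bound [-u·v, u·v] with u = Σ|x_i|, v = Σ|y_i|. Dense-grid check:
import itertools

def residue_range(xs, ys, steps=100):
    grid = [i / steps for i in range(-steps, steps + 1)]   # ε_i ∈ [-1, 1]
    vals = [sum(x * e for x, e in zip(xs, eps)) *
            sum(y * e for y, e in zip(ys, eps))
            for eps in itertools.product(grid, repeat=len(xs))]
    return min(vals), max(vals)

xs, ys = [1.0, 1.0], [1.0, 1.0]           # x̂ and ŷ share ε1, ε2
print(residue_range(xs, ys))               # (0.0, 4.0): d_f = (ε1 + ε2)²
u, v = sum(map(abs, xs)), sum(map(abs, ys))
print((-u * v, u * v))                     # AATRE's bound: (-4.0, 4.0)
```

Treating the two sums as independent, as AATRE does, doubles the interval width in this example.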
6 Case studies
The following nonlinear system cases are used to demonstrate the efficiency of the proposed refined affine form of multiplication. These cases are commonly used in signal processing. The first two cases are univariate and come from [11]; the rest are multivariate polynomial functions and come from [27–29].
6.1 Introduction of the cases
where the coefficients are obtained by a polynomial curve-fitting technique.
where u ∈ [0, 1].
- 1.Savitzky-Golay filter:$\begin{array}{ll}{f}_{1}\left(\mathit{X}\right)& =7{x}_{1}^{3}-984{x}_{2}^{3}-76{x}_{1}^{2}{x}_{2}+92{x}_{1}{x}_{2}^{2}+7{x}_{1}^{2}\\ \phantom{\rule{1em}{0ex}}-39{x}_{1}{x}_{2}-46{x}_{2}^{2}+7{x}_{1}-46{x}_{2}-75\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathit{X}=\phantom{\rule{0.3em}{0ex}}{[\phantom{\rule{0.3em}{0ex}}-2,2]}^{2}\end{array}$
- 2.Image rejection unit:$\begin{array}{ll}{f}_{2}\left(\mathit{X}\right)& =16384\left({x}_{1}^{4}+{x}_{2}^{4}\right)+64767\left({x}_{1}^{2}-{x}_{2}^{2}\right)+{x}_{1}-{x}_{2}\\ \phantom{\rule{1em}{0ex}}+57344{x}_{1}{x}_{2}({x}_{1}-{x}_{2})\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathit{X}=\phantom{\rule{0.3em}{0ex}}{[\phantom{\rule{0.3em}{0ex}}0,1]}^{2}\end{array}$
- 3.A random function:$\begin{array}{ll}{f}_{3}\left(\mathit{X}\right)& =({x}_{1}-1)({x}_{1}+2)({x}_{2}+1)({x}_{2}-2){x}_{3}^{2}\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathit{X}=\phantom{\rule{0.3em}{0ex}}{[\phantom{\rule{0.3em}{0ex}}-2,2]}^{3}\end{array}$
- 4.Mitchell function:$\begin{array}{ll}{f}_{4}\left(\mathit{X}\right)& =4\left[{x}_{1}^{4}+{\left({x}_{2}^{2}+{x}_{3}^{2}\right)}^{2}\right]+17{x}_{1}^{2}\left({x}_{2}^{2}+{x}_{3}^{2}\right)\\ \phantom{\rule{1em}{0ex}}-20\left({x}_{1}^{2}+{x}_{2}^{2}+{x}_{3}^{2}\right)+17\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathit{X}=\phantom{\rule{0.3em}{0ex}}{[\phantom{\rule{0.3em}{0ex}}-2,2]}^{3}\end{array}$
- 5.Matyas function:$\begin{array}{ll}{f}_{5}\left(\mathit{X}\right)& =0.26({x}_{1}^{2}+{x}_{2}^{2})-0.48{x}_{1}{x}_{2}\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathit{X}=\phantom{\rule{0.3em}{0ex}}{[\phantom{\rule{0.3em}{0ex}}-100,100]}^{2}\end{array}$
- 6.Three-hump function:$\begin{array}{ll}{f}_{6}\left(\mathit{X}\right)& =12{x}_{1}^{2}-6.3{x}_{1}^{4}+{x}_{1}^{6}+6{x}_{2}({x}_{2}-{x}_{1})\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathit{X}=\phantom{\rule{0.3em}{0ex}}{[\phantom{\rule{0.3em}{0ex}}-10,10]}^{2}\end{array}$
- 7.Goldstein-Price function:$\begin{array}{ll}{f}_{7}\left(\mathit{X}\right)& =\left[\phantom{\rule{0.3em}{0ex}}1+{({x}_{1}+{x}_{2}+1)}^{2}\left(19-14{x}_{1}+3{x}_{1}^{2}-14{x}_{2}\right.\right.\\ \phantom{\rule{1em}{0ex}}\left.\left.\phantom{\rule{0.3em}{0ex}}+\phantom{\rule{0.3em}{0ex}}6{x}_{1}{x}_{2}+3{x}_{2}^{2}\right)\right]\times \left[30+{(2{x}_{1}-3{x}_{2})}^{2}\right.\\ \phantom{\rule{1em}{0ex}}\times \left.\left(18-32{x}_{1}+12{x}_{1}^{2}+48{x}_{2}-36{x}_{1}{x}_{2}+27{x}_{2}^{2}\right)\right]\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathit{X}=\phantom{\rule{0.3em}{0ex}}{[\phantom{\rule{0.3em}{0ex}}-2,2]}^{2}\end{array}$
- 8.Ratscheck function:$\begin{array}{ll}{f}_{8}\left(\mathit{X}\right)& =4{x}_{1}^{2}-2.1{x}_{1}^{4}+\frac{1}{3}{x}_{1}^{6}+{x}_{1}{x}_{2}-4{x}_{2}^{2}+4{x}_{2}^{4}\\ \text{where the input range:}\phantom{\rule{1em}{0ex}}\mathit{X}=\phantom{\rule{0.3em}{0ex}}{[\phantom{\rule{0.3em}{0ex}}-100,100]}^{2}\end{array}$
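As a quick sanity check on one of these cases, the true range of the Matyas function f_{5} can be sampled and compared with a bound that ignores the correlation between the repeated occurrences of x_{1} and x_{2} (plain interval-style reasoning, not the paper's implementation):

```python
# Matyas case f5 on X = [-100, 100]^2; its true range is [0, 10^4]
# (cf. Table 2). Losing the correlation between the squared terms and the
# cross term already pushes the lower bound down to about -4800.
import itertools

def f5(x1, x2):
    return 0.26 * (x1**2 + x2**2) - 0.48 * x1 * x2

pts = [i * 10.0 for i in range(-10, 11)]      # coarse grid on [-100, 100]^2
vals = [f5(a, b) for a, b in itertools.product(pts, repeat=2)]
print(min(vals), max(vals))                    # ≈ 0 and ≈ 10000 (true range)

# Uncorrelated, IA-style bound: 0.26·[0, 2e4] - 0.48·[-1e4, 1e4]
print((0.26 * 0 - 0.48 * 1e4, 0.26 * 2e4 + 0.48 * 1e4))   # ≈ (-4800, 10000)
```

The uncorrelated lower bound coincides numerically with the [-4800, 10^{4}] reported for AACHA and AASEE in Table 2, while AATRE widens it further to [-10^{4}, 10^{4}].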
6.2 Analysis of case 1
Comparison of ranges and range intervals for every variable of the three methods for case 1
Variable | AATRE range | AATRE interval | AACHA range | AACHA interval | AASEE range | AASEE interval
---|---|---|---|---|---|---
y _{1} | [0.1618,0.2168] | 0.055 | [0.1618,0.2168] | 0.055 | [0.1618,0.2168] | 0.055 |
y _{2} | [-0.4645,-0.2752] | 0.1893 | [-0.4645,-0.2890] | 0.1755 | [-0.4645,-0.2890] | 0.1755 |
y _{3} | [0.6120,1.0094] | 0.3974 | [0.6558,1.0025] | 0.3467 | [0.6929,1.0025] | 0.3096 |
y | [-0.0541,0.8650] | 0.9191 | [-0.0253,0.7685] | 0.7938 | [-0.0068,0.7068] | 0.7136 |
6.3 Comparison of range and computational complexity by the three cases
The output ranges by the three methods of case 2 and case 3 can be obtained according to the process of case 1.
Comparison of analytical ranges and bits by the three methods
c.fun | True range | True bits | AATRE range | AATRE bits | AACHA range | AACHA bits | AASEE range | AASEE bits
---|---|---|---|---|---|---|---|---
c_{1}.Y | [0,0.6931] | 1 | [-0.0541,0.8650] | 1 | [-0.0253,0.7685] | 1 | [-0.0068,0.7068] | 1 |
c_{2}.B_{0} | [0,0.17] | 1 | [-0.13,0.17] | 1 | [-0.05,0.17] | 1 | [-0.02,0.17] | 1 |
c_{2}.B_{1} | [0.17,0.67] | 1 | [-0.33,1.29] | 2 | [-0.05,0.98] | 1 | [0.10,0.92] | 1 |
c_{2}.B_{2} | [0.17,0.67] | 1 | [-0.21,1.17] | 2 | [-0.02,0.89] | 1 | [0.04,0.73] | 1 |
c_{2}.B_{3} | [-0.17,0] | 1 | [-0.17,0.13] | 1 | [-0.17,0.05] | 1 | [-0.17,0.02] | 1 |
c_{3}.f_{1} | [-9453,9303] | 15 | [-9821,9671] | 15 | [-9793,9487] | 15 | [-9793,9487] | 15 |
c_{3}.f_{2} | [-5.51e 4,8.79e 4] | 18 | [-1.75e 5,1.79e 5] | 19 | [-0.95e 5,1.28e 5] | 18 | [-1.15e 5,1.41e 5] | 19 |
c_{3}.f_{3} | [-36,64] | 8 | [-256,256] | 10 | [-192,192] | 9 | [-64,64] | 8 |
c_{3}.f_{4} | [-8,641] | 11 | [-1087,1121] | 12 | [-223,881] | 11 | [-335,641] | 11 |
c_{3}.f_{5} | [0,10^{4}] | 15 | [-10^{4},10^{4}] | 15 | [-4800,10^{4}] | 15 | [-4800,10^{4}] | 15 |
c_{3}.f_{6} | [0,0.94e 6] | 21 | [-1.07e 6,1.07e 6] | 22 | [-0.06e 6,1.00e 6] | 21 | [-0.11e 6,0.94e 6] | 21 |
c_{3}.f_{7} | [3,1.01e 6] | 21 | [-1.42e 8,1.42e 8] | 29 | [-1.23e 8,1.13e 8] | 28 | [-9.87e 7,9.61e 7] | 28 |
c_{3}.f_{8} | [-1.03,3.3e 11] | 40 | [-3.3e 11,3.3e 11] | 40 | [-2.1e 8,3.3e 11] | 40 | [-4.2e 10,3.3e 11] | 40 |
To calculate the estimated range of d_{ f }, the condition ∃ε_{ i } = ±1, i ∈ {1, 2, …, n} in (27) is replaced by ∀ε_{ i } = ±1 in AASEE. The difference between the estimated range and the true range of d_{ f } is introduced by this approximation. In most applications, the estimated ranges computed by AASEE are closer to the true ranges than those by AACHA. In AACHA, however, the estimated minimum and maximum of $\widehat{x}\widehat{y}$ on the boundary of the polygon are independent of the values of ε_{ i }; in some applications, such as functions f_{2} and f_{8} in Table 2, the results by AASEE are almost the same as those by AACHA.
Comparison of range ratios and computational complexity by the three methods
c.fun | AATRE R(T) | AATRE N_{m} | AATRE N_{a} | AACHA R(C) | AACHA N_{e} | AACHA N_{m} | AACHA N_{a} | AASEE R(A) | AASEE N_{m} | AASEE N_{a}
---|---|---|---|---|---|---|---|---|---|---
c_{1}.Y | 1.33 | 17 | 20 | 1.15 | 18 | 14 | 17 | 1.03 | 20 | 21 |
c_{2}.B_{0} | 1.76 | 13 | 8 | 1.29 | 7 | 11 | 10 | 1.12 | 15 | 10 |
c_{2}.B_{1} | 3.24 | 20 | 11 | 2.06 | 7 | 18 | 13 | 1.64 | 22 | 13 |
c_{2}.B_{2} | 2.76 | 22 | 13 | 1.82 | 7 | 20 | 15 | 1.38 | 24 | 15 |
c_{2}.B_{3} | 1.76 | 13 | 7 | 1.29 | 7 | 17 | 9 | 1.12 | 15 | 9 |
c_{3}.f_{1} | 1.04 | 16 | 9 | 1.03 | 22 | 13 | 15 | 1.03 | 22 | 21 |
c_{3}.f_{2} | 2.48 | 42 | 54 | 1.56 | 16 | 36 | 51 | 1.79 | 46 | 59 |
c_{3}.f_{3} | 5.12 | 17 | 19 | 3.84 | 21 | 19 | 23 | 1.28 | 23 | 27 |
c_{3}.f_{4} | 3.4 | 12 | 12 | 1.7 | 9 | 22 | 34 | 1.5 | 33 | 37 |
c_{3}.f_{5} | 2 | 6 | 2 | 1.48 | 6 | 6 | 4 | 1.48 | 9 | 4 |
c_{3}.f_{6} | 2.28 | 8 | 5 | 1.13 | 12 | 6 | 16 | 1.12 | 17 | 16 |
c_{3}.f_{7} | 281.2 | 48 | 72 | 233.7 | 36 | 40 | 66 | 192.9 | 67 | 131 |
c_{3}.f_{8} | 2 | 12 | 5 | 1 | 14 | 9 | 22 | 1.39 | 19 | 22 |
6.4 Comparison of the design cost by the three methods
To compare the design cost, i.e., the system area by the three methods, the fractional word lengths are obtained by the precision analysis in [11]. We select the random function of case 3, c_{3}.f_{3}, for this section. The design of c_{3}.f_{3} is synthesized on a Xilinx XC2VP30-7FF896 FPGA device (Xilinx, San Jose, CA, USA).
7 Conclusions
This paper presents a novel affine approximation method for multiplication, Approximation Affine based on Space Extreme Estimation. In this method, an extra noise symbol is added to an approximated affine form.
To reduce the uncertainty in AA, we derive this method in the (n + 1)-dimensional space E^{ n + 1 }. In E^{ n + 1 }, the approximate affine form can be regarded as the tangent hyperplane at a certain point of the (n + 1)-dimensional curved surface. Using linear geometry, it is proven that the f_{ z } of AASEE is the closest to the result of multiplication among all possible approximate affine forms. Taking the ε_{ i } as input arguments, all identical noise symbols of different variables are taken into account together; hence, the uncertainty of the $\widehat{d}$ of AASEE is reduced. Based on the extreme value theory of multivariable functions, we prove that the range of this $\widehat{d}$ covers the true range of the difference introduced by the approximation and is much tighter than that by AATRE and AACHA.
The uncertainty in AASEE is much smaller than that in AATRE and AACHA on average. At the same time, the computational complexity of AASEE is the same as that of AATRE and lower than that of AACHA.
In the case studies, the accuracy of AASEE is on average 1.69 times that of AATRE and 1.12 times that of AACHA. The integer word length derived by AASEE is at most 2 bits less than that by AATRE and 1 bit less than that by AACHA. For the case of c_{3}.f_{3}, the area computed by AASEE is less than that by AATRE and AACHA, and the percentage area saving of AASEE over AATRE decreases from 14.34% to 5.62% as the target precision increases.
References
- Constantinides G, Woeginger G: The complexity of multiple wordlength assignment. Appl. Math. Lett 2002, 15(2):137-140. 10.1016/S0893-9659(01)00107-0MATHMathSciNetView ArticleGoogle Scholar
- Cmar R, Rijnders L, Schaumont P, Vernalde S, Bolsens I: A methodology and design environment for DSP ASIC fixed point refinement. In Proceedings of Design, Automation and Test in Europe. Munich: IEEE Computer Society; 09–12 March 1999:271-276.Google Scholar
- Kum K, Sung W: Combined word-length optimization and high level synthesis of digital signal processing systems. IEEE Trans. Computer-Aided Design Integr. Circuits Syst 2001, 20(8):921-930. 10.1109/43.936374View ArticleGoogle Scholar
- Roy S, Banerjee P: An algorithm for trading off quantization error with hardware resources for MATLAB-based FPGA design. IEEE Trans. Comput 2005, 54(7):886-896. 10.1109/TC.2005.106View ArticleGoogle Scholar
- Mallik A, Sinha D, Zhou H: Low-power optimization by smart bit-width allocation in a SystmC-based ASIC design environment. IEEE Trans. Computer-Aided Design Integr. Circuits Syst 2007, 26(3):447-455.View ArticleGoogle Scholar
- Caffarena G, Carreras C, Lopez JA: SQNR estimation of fixed-point DSP algorithms. Eurasip J. Adv. Signal Process 2010, 21: 1-12.Google Scholar
- Banciu A, Casseau E, Menard D: Stochastic modeling for floating-point to fixed-point conversion. In Proceedings of IEEE Workshop on Signal Processing Systems (SiPS). Beirut: IEEE Computer Society; 4–7 October 2011:180-185.Google Scholar
- Fang CF, Rutenbar R, Puschel M, Chen T: Toward efficient static analysis of finite-precision effects in DSP applications via affine arithmetic modeling. In Proceedings of Design Automation Conference, Institute of Electrical and Electronics Engineers Inc.. Anaheim; 2–6 June 2003:496-501.Google Scholar
- Fang CF, Rutenbar R: Fast, accurate static analysis for fixed-point finite-precision effects in DSP designs. In Proceedings of International Conference on Computer-Aided Design, Institute of Electrical and Electronics Engineers Inc.. San Jose; 9–13 November 2003:275-282.Google Scholar
- Pu Y, Ha Y: An automated, efficient and static bit-width optimization methodology towards maximum bit-width-to-error tradeoff with affine arithmetic model. In Proceedings of Asia and South Pacific Design Automation Conference, Institute of Electrical and Electronics Engineers Inc.. Yokohama; 24–27 January 2006:886-891.Google Scholar
- Lee DU, Gaffar AA, Cheung RC, Mencer O, Luk W, Constantinides GA: Accuracy guaranteed bit-width optimisation. IEEE Trans. Computer-Aided Design Integr. Circuits Syst 2006, 25(10):1990-2000.View ArticleGoogle Scholar
- Osborne WG, Coutinho JGF, Luk W, Mencer O: Instrumented multi-stage word-length optimization. In Proceedings of IEEE International Conference on Field-Programmable Technology, Institute of Electrical and Electronics Engineers Inc.. Kitakyushu; 12–14 December 2007:89-96.Google Scholar
- Lopez JA, Carreras C, Nieto-Taladriz O: Improved interval-based characterization of fixed-point LTI systems with feedback loops. IEEE Trans. Computer-Aided Design Integr. Circuits Syst 2007, 26(11):1923-1933.
- Zhang L, Zhang Y, Zhou W: Tradeoff between approximation accuracy and complexity for range analysis using affine arithmetic. J. Signal Process. Syst 2010, 61(3):279-291. doi:10.1007/s11265-010-0452-2
- Sarbishei O, Radecka K, Zilic Z: Analytical optimization of bit-widths in fixed-point LTI systems. IEEE Trans. Computer-Aided Design Integr. Circuits Syst 2012, 31(3):343-355.
- Rocher R, Menard D, Scalart P: Analytical approach for numerical accuracy estimation of fixed-point systems based on smooth operations. IEEE Trans. Circuits Syst. I, Reg. Papers 2012, 59(10):2326-2339.
- Kinsman AB, Nicolici N: Bit-width allocation for hardware accelerators for scientific computing using SAT-modulo theory. IEEE Trans. Computer-Aided Design Integr. Circuits Syst 2010, 29(3):406-413.
- Kinsman AB, Nicolici N: Computational vector-magnitude-based range determination for scientific abstract data types. IEEE Trans. Comput 2011, 60(11):1652-1663.
- Wadekar SA, Parker AC: Accuracy sensitive word-length selection for algorithm optimization. In Proceedings of the International Conference on Computer Design: VLSI in Computers and Processors (ICCD '98). Austin: Institute of Electrical and Electronics Engineers Inc.; 5–7 October 1998:54-61.
- Carreras C, Lopez JA, Nieto-Taladriz O: Bit-width selection for data-path implementations. In Proceedings of the 12th International Symposium on System Synthesis, 1999. Boca Raton: IEEE Computer Society; 1–4 November 1999:114-119.
- Comba JLD, Stolfi J: Affine arithmetic and its applications to computer graphics. In Proceedings of SIBGRAPI'93 - VI Simposio Brasileiro de Computacao Grafica e Processamento de Imagens. Recife: IEEE Computer Society; 20–22 October 1993:9-18.
- Stolfi J, de Figueiredo LH: Affine arithmetic. In Self-Validated Numerical Methods and Applications (Monograph for the 21st Brazilian Mathematics Colloquium). Rio de Janeiro: IMPA; 1997:70-74.
- Huang K, Yee H: Improved tangent hyperplane method for transient stability studies [of power systems]. In Proceedings of APSCOM-91 Conference. Hong Kong: Institution of Electrical Engineers; 5–8 November 1991:363-366.
- Eriksen E, Gustavsen TS: GRA6035 Mathematics. Oslo: BI Norwegian Business School; 2010.
- Moore R: Interval Analysis. New Jersey: Prentice-Hall; 1966.
- Pang Y, Radecka K: An efficient algorithm of performing range analysis for fixed-point arithmetic circuits based on SAT checking. In Proceedings of IEEE International Symposium on Circuits and Systems (ISCAS). Rio de Janeiro: IEEE Computer Society; 15–18 May 2011:1736-1739.
- Shekhar N, Kalla P, Enescu F: Equivalence verification of arithmetic datapaths with multiple word-length operands. In Proceedings of Design, Automation and Test in Europe. Munich: IEEE Computer Society; 6–10 March 2006:824-829.
- Gopalakrishnan S, Kalla P, Meredith MB, Enescu F: Finding linear building-blocks for RTL synthesis of polynomial datapaths with fixed-size bit-vectors. In Proceedings of International Conference on Computer-Aided Design. San Jose: Institute of Electrical and Electronics Engineers Inc.; 5–8 November 2007:143-148.
- Shou H, Song W, Shen J, Martin R, Wang G: A recursive Taylor method for ray-casting algebraic surfaces. In Proceedings of International Conference on Computer Graphics and Virtual Reality. Las Vegas: CSREA Press; 26–29 June 2006:196-204.
- Jiang J, Luk W, Rueckert D: FPGA-based computation of free-form deformations in medical image registration. In Proceedings of IEEE International Conference on Field-Programmable Technology 2003. Tokyo: IEEE Computer Society; 15–17 December 2003:234-241.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.