An active noise control algorithm with gain and power constraints on the adaptive filter
© Kozacky and Ogunfunmi; licensee Springer. 2013
Received: 2 May 2012
Accepted: 21 January 2013
Published: 11 February 2013
This article develops a new adaptive filter algorithm for active noise control systems in which gain or power constraints must be placed on the filter output, either to prevent overdriving the transducer or to maintain a specified system power budget. When the frequency-domain version of the least-mean-square (LMS) algorithm is used for the adaptive filter, this limiting can be done directly in the frequency domain, allowing the adaptive filter response to be reduced in the frequency regions of constraint violation with minimal effect at other frequencies. The new algorithm uses a penalty function formulation to place multiple constraints on the filter directly in the frequency domain, and it improves on existing algorithms in both convergence rate and frequency-selective limiting.
Active noise control (ANC) systems remove interference by generating an anti-noise output that destructively cancels the interference. In some applications, the maximum output level must be limited to prevent overdriving the transducer or to maintain a specified system power budget. In a frequency-domain implementation of the least-mean-square (LMS) algorithm, the limiting constraints can be placed directly in the frequency domain, allowing the adaptive filter response to be reduced in the frequency regions of constraint violation with minimal effect at other frequencies. Constraints can be placed on either the filter gain or the filter output power, as appropriate for the application.
Applications of gain-constrained adaptive systems include systems that use a microphone for feedback, where the acoustic path to the microphone introduces notches or peaks in the microphone's frequency response (which may not be present at other locations). Adding gain constraints to the adaptive filter prevents distortion at those frequencies by limiting the peak magnitude of the filter coefficients. Applications of power-constrained adaptive systems include requirements to limit the maximum power delivered to the secondary path S(z) to a predetermined constraint value, to prevent overdriving the transducer, saturating the output amplifier, or exciting other nonlinear behavior. The primary difference between the two is that the gain-constrained algorithm does not take the input power into account when determining the constraint violation.
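The per-bin gain limiting described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm; `cap_filter_gain` and the constraint vector `C` are hypothetical names:

```python
import numpy as np

def cap_filter_gain(W, C):
    """Scale each frequency-domain weight whose magnitude exceeds the
    per-bin gain constraint C[k] down to the constraint boundary,
    preserving its phase; compliant bins are left untouched."""
    W = np.asarray(W, dtype=complex)
    mag = np.abs(W)
    scale = np.where(mag > C, C / np.maximum(mag, 1e-12), 1.0)
    return W * scale

# Bin 0 violates a unit-gain constraint (|3 + 4j| = 5); bin 1 does not.
W_capped = cap_filter_gain([3.0 + 4.0j, 0.5 + 0.0j], np.array([1.0, 1.0]))
```

Because the scaling is applied bin by bin, only the violating frequency region is attenuated, which is exactly the frequency-selective behavior the frequency-domain formulation buys over time-domain limiting.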
Previous approaches to gain and output power limiting include output rescaling, the leaky LMS, and a class of algorithms termed constrained steepest descent (CSD). We develop a new class of gain-constrained and power-constrained algorithms termed constrained minimal disturbance (CMD). The new CMD algorithms provide faster convergence than previous algorithms and can handle multiple constraints.
This article is organized as follows. Section 2 presents a review of prior work. Section 3 presents the CMD algorithm development. Section 4 presents a convergence analysis. Section 5 presents simulations with comparisons to other algorithms. Section 6 provides some concluding remarks.
2. Review of prior work
For comparison purposes, the following notation is used:
- n: adaptive filter size and block size
- N: sample number in the time domain
- m: block number in the time or frequency domain
- W: weight in the frequency domain
- X: input in the frequency domain
- E: error in the frequency domain
- D: plant output in the frequency domain
- Y: filter output in the frequency domain
- C: gain or power constraint
- S: secondary path
Lowercase w, x, e, d, and y are the time-domain representations of their respective frequency-domain counterparts. Vectors will be denoted in boldface, and the subscript k is used to denote an individual component of a vector. The superscript * is used to denote complex conjugate, and the superscript T denotes vector transpose. The parameter μ is used as a convergence step-size coefficient, and the parameter γ is used as a leakage coefficient.
In the output rescaling approach, in addition to the output being clipped, the adaptive filter weights are also rescaled; filter adaptation then continues from the weight values corresponding to the actual (clipped) output.
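As a concrete sketch of this rescaling step (a hypothetical helper, assuming a peak output limit `y_max`; not the paper's equation (2)):

```python
import numpy as np

def rescale_output_and_weights(y, w, y_max):
    """Clip the filter output block to the peak limit y_max and rescale
    the weights by the same factor, so adaptation continues from the
    weight values corresponding to the actual (clipped) output."""
    peak = np.max(np.abs(y))
    if peak <= y_max:
        return y, w
    scale = y_max / peak
    return y * scale, w * scale

# Peak output is 2.0, so both output and weights are halved.
y_out, w_out = rescale_output_and_weights(
    np.array([0.5, -2.0, 1.0]), np.array([0.2, 0.4]), 1.0)
```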
The leaky LMS reduces the filter gain at each iteration, with the leakage coefficient γ controlling the rate of reduction. The coefficient γ is determined experimentally for the application, but gain reduction occurs at all frequencies, resulting in a larger steady-state convergence error. When the leakage is zero, this algorithm reduces to the standard LMS.
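In time-domain form, one leaky-LMS iteration can be sketched as follows (a minimal illustration; `mu` and `gamma` are the μ and γ defined above):

```python
import numpy as np

def leaky_lms_step(w, x, d, mu, gamma):
    """One leaky-LMS iteration: the (1 - mu*gamma) factor shrinks every
    weight each step, which limits gain at all frequencies at the cost
    of a larger steady-state error; gamma = 0 gives the standard LMS."""
    e = d - np.dot(w, x)
    w_next = (1.0 - mu * gamma) * w + mu * e * x
    return w_next, e

# With zero input the update is pure decay at rate (1 - mu*gamma) = 0.95.
w_next, e = leaky_lms_step(np.array([1.0, 1.0]), np.zeros(2), 0.0, 0.1, 0.5)
```

The indiscriminate shrinkage visible here, every weight decays regardless of which frequencies violate a constraint, is the limitation the frequency-selective CMD algorithm is designed to remove.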
where 0 is the N-point zero vector.
where γ is the leakage factor.
where the z subscript sets the result in the brackets to 0 if the value in the brackets is less than 0 (the constraint is satisfied), or to the value of the difference (the constraint is violated). The constraint is individually applied to each frequency bin. Here, α controls the “tightness” of the penalty: a larger α places a stiffer penalty on constraint violation at the expense of a larger steady-state convergence error.
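The bracketed z-subscript operation is an element-wise hinge, applied one bin at a time; a sketch with hypothetical names:

```python
import numpy as np

def bracket_z(value, C):
    """[value - C]_z applied per frequency bin: 0 where the constraint
    is satisfied (value <= C), the positive difference where violated."""
    return np.maximum(np.asarray(value) - C, 0.0)

# Squared-gain constraint of 1.0 per bin: only the second bin violates,
# with |2|^2 - 1 = 3.
viol = bracket_z(np.abs(np.array([0.5 + 0.0j, 2.0 + 0.0j])) ** 2,
                 np.array([1.0, 1.0]))
```

Multiplying this quantity by α then gives the penalty term: zero cost inside the feasible region, and a cost growing with the degree of violation outside it.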
Note that in (10) there is only one constraint. When used for comparison purposes, we will denote (8) and (9) as constrained steepest descent (CSD) algorithms.
3. New algorithm development
The new CMD algorithm will be developed using the principle of minimal disturbance, which states that the weight vector should change minimally from one iteration to the next. A constraint is added for filter convergence, and a second constraint is added for either the filter gain (the coefficient magnitude in each frequency bin) or the filter output power, depending on which quantity is to be limited. The method of Lagrange multipliers [11, 12] is then used to solve this constrained optimization problem.
3.1. Gain-constrained algorithm
where the z subscript term will force α k to zero if the constraint of (14) is satisfied.
is a diagonal matrix of variable leakage factors as determined by (28).
where β is a smoothing constant slightly less than 1. Note that in equations such as (31), which use an estimated power value in the denominator, low power in a particular frequency bin can cause division by a very small number and hence numerical instability; to guard against this, a small positive regularization parameter is added to the denominator.
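The smoothing and regularization just described can be sketched as follows (`eps` is the hypothetical regularization parameter; the function names are illustrative):

```python
import numpy as np

def smoothed_power(P_prev, X, beta):
    """Per-bin exponential power average:
    P[m] = beta * P[m-1] + (1 - beta) * |X[m]|^2."""
    return beta * P_prev + (1.0 - beta) * np.abs(X) ** 2

def normalized_step(mu, P, eps=1e-8):
    """Per-bin step size mu / (P + eps); eps keeps the division finite
    in frequency bins where the estimated power is near zero."""
    return mu / (P + eps)

# One smoothing step: 0.5 * 1.0 + 0.5 * |2|^2 = 2.5.
P = smoothed_power(np.array([1.0]), np.array([2.0 + 0.0j]), beta=0.5)
step_quiet = normalized_step(0.1, np.array([0.0]))  # stays finite
```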
If the constraint of (14) is violated, the CMD algorithm will reduce the magnitude of the adaptive filter frequency response in proportion to the level of constraint violation.
The CMD algorithm normalizes the weight update in a manner similar to the normalized-LMS with leakage. The amount of leakage is dependent on the level of constraint violation.
The CMD algorithm scales the weight update by the inverse of the secondary path frequency response, resulting in faster convergence in regions corresponding to valleys (low magnitude response) in the secondary path.
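Taken together, the three properties above suggest a per-bin update of the following shape. This is an illustrative simplification under assumed names, not the paper's exact recursion (31) with the leakage matrix of (28):

```python
import numpy as np

def cmd_style_update(W, X, E, S, C, mu=0.1, alpha=1.0, eps=1e-8):
    """Illustrative per-bin update combining: (i) variable leakage that
    shrinks only bins violating the gain constraint C, (ii) NLMS-style
    normalization by input power, and (iii) scaling by the inverse of
    the secondary-path magnitude |S|, which speeds convergence in
    low-magnitude (valley) regions of S."""
    P = np.abs(X) ** 2                       # instantaneous bin power
    viol = np.maximum(np.abs(W) - C, 0.0)    # gain-constraint violation
    leak = 1.0 / (1.0 + alpha * viol)        # < 1 only where violated
    step = mu * np.conj(X) * E / ((P + eps) * (np.abs(S) + eps))
    return leak * (W + step)

# A violating bin (|W| = 2 > C = 1) is pulled back toward the
# constraint even with no input excitation.
W_next = cmd_style_update(np.array([2.0 + 0.0j]), np.zeros(1),
                          np.zeros(1), np.ones(1), np.ones(1))
```

The key design choice mirrored here is that leakage is a per-bin function of the constraint violation rather than a global constant, so compliant frequency regions adapt as in an ordinary normalized LMS.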
3.2. Power-constrained algorithm
Following a development similar to the gain-constrained case results in the CMD algorithm given by (31) using a new diagonal matrix of leakage factors (32).
4. Convergence analysis
We assume that all signals are white, zero-mean, Gaussian, and wide-sense stationary, and we employ the independence assumption under a steady-state condition, where the constraint violation is constant and the transform-domain weights are mutually uncorrelated (which holds as the filter size N grows large). We also normalize the input power to unity in (34), which allows the analysis to apply to both the gain-constrained and power-constrained cases. Uncorrelated white measurement noise with variance σ_n^2 is denoted by η_k.
4.1. Mean value
4.2. Convergence in the mean
4.3. Convergence in the mean square
5. Simulations
In the simulations, experimental data is used for the plant, modeled by a 512-term all-zero filter centered at N/2. The output rescaling algorithm (2) is applied in the frequency domain to determine the steady-state adaptive filter coefficients. We demonstrate the improved convergence of the CMD algorithm compared to the CSD algorithm and the leaky LMS in both gain-constrained and power-constrained applications. The constraint terms C, α_max,k, and C_k are held constant in the simulations, but could be shaped over frequency for specific applications. External uncorrelated Gaussian white noise with a variance of 0.01 is added for the convergence comparisons, and an average of 100 runs is plotted. We assume prior knowledge of the secondary path transfer function; methods for on-line and off-line secondary path identification are presented in, e.g., [20, 21].
5.1. Gain-constrained algorithm
5.2. Power-constrained algorithm
6. Conclusions
A new algorithm, the CMD LMS, was presented for gain-constrained and power-constrained adaptive filter applications. Stability bounds were derived in the mean and mean-square sense. The CMD algorithm was compared to the CSD algorithm and the leaky LMS for filtered-X ANC applications. The new CMD algorithm provides faster convergence and improved frequency-response performance, especially in colored-noise environments, and can handle multiple constraints in both gain-constrained and power-constrained applications.
- Nelson PA, Elliott SJ: Active Control of Sound. Academic Press, London; 1992.
- Rafaely B, Elliott SJ: A computationally efficient frequency-domain LMS algorithm with constraints on the adaptive filter. IEEE Trans. Signal Process. 2000, 48(6):1649-1655. 10.1109/78.845922
- Kuo SM, Morgan DR: Active Noise Control Systems: Algorithms and DSP Implementations. Wiley, New York; 1996.
- Taringoo F, Poshtan J, Kahaei MH: Analysis of effort constraint algorithm in active noise control systems. EURASIP J. Appl. Signal Process. 2006, 2006:1-9.
- Qiu X, Hansen CH: A study of time-domain FXLMS algorithms with control output constraint. J. Acoust. Soc. Am. 2001, 109(6):2815-2823.
- Darlington P: Performance surfaces of minimum effort estimators and controllers. IEEE Trans. Signal Process. 1995, 43(2):536-539. 10.1109/78.348136
- Widrow B, Stearns SD: Adaptive Signal Processing. Prentice-Hall, Upper Saddle River, NJ; 1985.
- Nowak MP, Van Veen BD: A constrained transform-domain adaptive IIR filter structure for active noise control. IEEE Trans. Speech Audio Process. 1997, 5(5):334-347.
- Morgan DR, Thi JC: A delayless subband adaptive filter architecture. IEEE Trans. Signal Process. 1995, 43(8):1819-1830. 10.1109/78.403341
- Shynk JJ: Frequency-domain and multirate adaptive filtering. IEEE Signal Process. Mag. 1992, 9(1):14-37.
- Haykin S: Adaptive Filter Theory. Prentice-Hall, Upper Saddle River, NJ; 2002.
- Fletcher R: Practical Methods of Optimization. Wiley, New York; 1987.
- Kozacky WJ, Ogunfunmi T: Convergence analysis of a frequency-domain adaptive filter with constraints on the output weights. In Proceedings of the Asilomar Conference on Signals, Systems, and Computers. Pacific Grove, USA; 2009:1350-1355.
- Elliott SJ, Baek KH: Effort constraints in adaptive feedforward control. IEEE Signal Process. Lett. 1996, 3(1):7-9.
- Sommen PCW, Van Gerwen PJ, Kotmans HJ, Janssen JEM: Convergence analysis of a frequency-domain adaptive filter with exponential power averaging and generalized window function. IEEE Trans. Circuits Syst. 1987, 34(7):788-798. 10.1109/TCS.1987.1086205
- Farhang-Boroujeny B, Chan KS: Analysis of the frequency-domain block LMS algorithm. IEEE Trans. Signal Process. 2000, 48(8):2332-2342. 10.1109/78.852014
- Mayyas K, Aboulnasr T: Leaky LMS algorithm: MSE analysis for Gaussian data. IEEE Trans. Signal Process. 1997, 45(4):927-934. 10.1109/78.564181
- Douglas SC: Performance comparison of two implementations of the leaky LMS adaptive filter. IEEE Trans. Signal Process. 1997, 45(8):2125-2129. 10.1109/78.611231
- Mayyas K, Aboulnasr T: Leaky LMS: a detailed analysis. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS). Seattle, USA; 1995, 2:1255-1258.
- Kuo SM, Vijayan D: A secondary path modeling technique for active noise control systems. IEEE Trans. Speech Audio Process. 1997, 5(4):374-377. 10.1109/89.593319
- Akhtar MT, Abe M, Kawamata M: On active noise control systems with online acoustic feedback path modeling. IEEE Trans. Audio Speech Lang. Process. 2007, 15(2):593-600.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.