
An active noise control algorithm with gain and power constraints on the adaptive filter

Abstract

This article develops a new adaptive filter algorithm intended for use in active noise control systems where it is required to place gain or power constraints on the filter output to prevent overdriving the transducer, or to maintain a specified system power budget. When the frequency-domain version of the least-mean-square algorithm is used for the adaptive filter, this limiting can be done directly in the frequency domain, allowing the adaptive filter response to be reduced in frequency regions of constraint violation, with minimal effect at other frequencies. We present the development of a new adaptive filter algorithm that uses a penalty function formulation to place multiple constraints on the filter directly in the frequency domain. The new algorithm outperforms existing approaches, providing a faster convergence rate and frequency-selective limiting.

1. Introduction

Active noise control (ANC) systems can be used to remove interference by generating an anti-noise output that can be used in the system to destructively cancel the interference [1]. In some applications, it is required to limit the maximum output level to prevent overdriving the transducer, or to maintain a specified system power budget. In a frequency-domain implementation of the least-mean-square (LMS) algorithm, the limiting constraints can be placed directly in the frequency domain, allowing the adaptive filter response to be reduced in the frequency regions of constraint violation, with minimal effect at other frequencies [2]. Constraints can be placed on either the filter gain, or filter output power, as appropriate for the application.

A general block diagram of an ANC system is illustrated in Figure 1, with H(z) representing the primary path or plant (e.g., an acoustic duct), W(z) representing the adaptive filter, and S(z) representing the secondary path (which may include the D/A converter, power output amplifier, and the transducer). The adaptive filter W(z) typically uses the filtered-X LMS algorithm, where the input to the LMS algorithm is first filtered by an estimate of the secondary path [3]. The adaptive filter will need to simultaneously identify H(z) and equalize S(z), with the additional constraint of limiting the maximum level delivered to S(z).

Figure 1. Block diagram of the ANC system.
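To make the filtered-X structure concrete, a schematic time-domain filtered-X LMS loop is sketched below. The function and variable names are our own, and the sketch idealizes the true secondary path as equal to its FIR estimate s_hat; it is an illustration of the structure, not the paper's reference implementation.

```python
import numpy as np

def fxlms(x, d, s_hat, N, mu):
    """Schematic filtered-X LMS loop (our notation): x is the reference,
    d the disturbance at the error sensor, s_hat an FIR estimate of the
    secondary path S(z), here also used as the true path."""
    w = np.zeros(N)                        # adaptive filter W(z)
    y = np.zeros(len(x))                   # anti-noise output
    e = np.zeros(len(x))
    xf = np.convolve(x, s_hat)[:len(x)]    # reference filtered by S-hat
    M = len(s_hat)
    for n in range(max(N, M), len(x)):
        y[n] = w @ x[n - N + 1:n + 1][::-1]
        e[n] = d[n] - s_hat @ y[n - M + 1:n + 1][::-1]  # residual at mic
        w += mu * e[n] * xf[n - N + 1:n + 1][::-1]      # filtered-x update
    return e
```

The key point the sketch illustrates is that the weight update uses the reference filtered through the secondary-path estimate (xf), not the raw reference x.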

Applications of gain-constrained adaptive systems include systems that use a microphone for feedback, where the acoustic path to the microphone introduces notches or peaks in the microphone frequency response (which may not be present at other locations). Adding gain constraints to the adaptive filter prevents distortion at those frequencies by limiting the peak magnitude of the filter coefficients [2]. Applications of power-constrained adaptive systems include requirements to limit the maximum power delivered to S(z) to a predetermined constraint value to prevent overdriving the transducer, output amplifier saturation, or other nonlinear behavior [4]. The primary difference between these implementations is that the gain-constrained algorithm does not take the input power into account when determining the constraint violation.

Previous implementations of gain and output power limiting include output rescaling, the leaky LMS, and a class of algorithms termed constrained steepest descent (CSD) previously presented in [2]. We develop a new class of gain-constrained and power-constrained algorithms termed constrained minimal disturbance (CMD). The new CMD algorithms provide faster convergence compared to previous algorithms, and the ability to handle multiple constraints.

This article is organized as follows. Section 2 presents a review of prior work. Section 3 presents the CMD algorithm development. Section 4 presents a convergence analysis. Section 5 presents simulations with comparisons to other algorithms. Section 6 provides some concluding remarks.

2. Review of prior work

For comparison purposes, the following notation is used:

N: adaptive filter size and block size
n: sample number in the time domain
m: block number in the time or frequency domain
W: weight in the frequency domain
X: input in the frequency domain
E: error in the frequency domain
D: plant output in the frequency domain
Y: filter output in the frequency domain
C: gain or power constraint
S: secondary path

Lowercase w, x, e, d, and y are the time-domain representations of their respective frequency-domain counterparts. Vectors will be denoted in boldface, and the subscript k is used to denote an individual component of a vector. The superscript * is used to denote complex conjugate, and the superscript T denotes vector transpose. The parameter μ is used as a convergence step-size coefficient, and the parameter γ is used as a leakage coefficient.

The first two methods of power limiting were described in detail in [5] and are briefly restated here. The first, “output clipping,” simply limits the output to a maximum value; this is what would normally happen in a real system (e.g., the output amplifier would saturate). With the filter output at iteration n denoted by y(n) and the output constraint by C, the output clipping algorithm is given by

$$ \text{if } |y(n+1)| > C: \quad y(n+1) = y(n+1)\,\frac{C}{|y(n+1)|} \qquad (1) $$

A potential problem in using output clipping for adaptive filtering applications is that the weight updates for w(n) continue to occur while the filter output remains clipped, causing potential stability problems since the filter weight update is decoupled from the filter output. To prevent this, the “output rescaling” algorithm can be used, which is given by

$$ \text{if } |y(n+1)| > C: \quad y(n+1) = y(n+1)\,\frac{C}{|y(n+1)|}, \qquad \mathbf{w}(n+1) = \mathbf{w}(n+1)\,\frac{C}{|y(n+1)|} \qquad (2) $$

Here, in addition to the output being clipped, the adaptive filter weights are also rescaled; filter adaptation continues from the appropriate weight value corresponding to the actual output.
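As an illustration, both limiters of (1) and (2) can be written in a few lines; the NumPy formulation and the function names are our own assumptions:

```python
import numpy as np

def clip_output(y, C):
    """Output clipping, eq. (1): scale the output back to the bound C."""
    mag = np.abs(y)
    return y * (C / mag) if mag > C else y

def rescale_output(y, w, C):
    """Output rescaling, eq. (2): scale the output and the weight vector
    w together, so adaptation continues from a consistent state."""
    mag = np.abs(y)
    if mag > C:
        return y * (C / mag), w * (C / mag)
    return y, w
```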

The next algorithm to be considered for gain or power limiting is the leaky LMS [6], which is given by

$$ \mathbf{w}(n+1) = (1 - \mu\gamma)\,\mathbf{w}(n) + \mu\, e(n)\,\mathbf{x}(n) \qquad (3) $$

The leaky LMS reduces the filter gain each iteration, with the leakage coefficient γ controlling the rate of reduction. The coefficient γ is determined experimentally according to the application, but gain reduction occurs at all frequencies, resulting in a larger steady-state convergence error. When the leakage is zero, this algorithm reduces to the standard LMS [7].
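In code, the leaky LMS update (3) differs from the standard LMS only in the (1 − μγ) shrinkage applied to the weights; a minimal sketch, assuming a real-valued input regressor vector:

```python
import numpy as np

def leaky_lms_update(w, x_n, e_n, mu, gamma):
    """One leaky LMS step, eq. (3); gamma = 0 recovers the standard LMS."""
    return (1 - mu * gamma) * w + mu * e_n * x_n
```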

The algorithms described thus far are processed directly in the time domain. However, with large filter lengths the required convolutions become computationally expensive, and alternative methods can be more efficient. If the processing is done in block form and a fast Fourier transform (FFT) is used, the required convolutions become multiplications. This also allows additional constraints to be added to limit the filter response directly in the frequency domain. For example, in [8], an ANC system using a loudspeaker with poor low-frequency response was stabilized in the FFT domain by zeroing out the low-frequency components, preventing adaptation at those frequencies. However, block processing introduces a one-block delay, which may be undesirable in some real-time applications. A delayless structure [9], with filtering in the time domain and signal processing in the frequency domain, can be used to mitigate this delay. A block diagram of the delayless frequency-domain LMS (FDLMS) is shown in Figure 2. In delayless ANC applications with a secondary path S(z), the adaptive filter input vector x(m) is first filtered by an estimate of the secondary path. The adaptive filter weight, input, and error vectors are defined as

$$ \begin{aligned} \mathbf{w}(m) &= \left[\, w_0(n)\;\; w_1(n)\;\; \cdots\;\; w_{N-1}(n) \,\right]^T \\ \mathbf{x}(m) &= \left[\, x(n)\;\; x(n-1)\;\; \cdots\;\; x(n-N+1) \,\right]^T \\ \mathbf{e}(m) &= \left[\, e(n)\;\; e(n-1)\;\; \cdots\;\; e(n-N+1) \,\right]^T \end{aligned} \qquad (4) $$
Figure 2. Block diagram of the delayless ANC system with frequency-domain processing.

A block size of N is used for both the filter and each new set of data to maximize computational efficiency, with m representing the block iteration. To avoid circular convolution effects, each FFT uses blocks of size 2N [10]. The frequency-domain input and error vectors (size 2N) are defined as

$$ \mathbf{X}(m) = \mathrm{FFT}\!\left[\, \mathbf{x}^T(n-N)\;\; \mathbf{x}^T(n) \,\right]^T, \qquad \mathbf{E}(m) = \mathrm{FFT}\!\left[\, \mathbf{0}^T\;\; \mathbf{e}^T(n) \,\right]^T \qquad (5) $$

where 0 is the N-point zero vector.

The delayless FDLMS weight update equation at iteration m without a gain or power constraint is given by [9]

$$ \mathbf{w}(m+1) = \mathbf{w}(m) + \mu\,\mathrm{IFFT}\!\left[\, \mathbf{X}^*(m)\,\mathbf{E}(m) \,\right]_+ \qquad (6) $$

where the + subscript denotes the causal part of the IFFT (corresponding to the gradient constraint in [10]), and μ is the convergence coefficient. Adding a leakage factor to (6) results in a frequency-domain version of the leaky LMS which can be used to limit the adaptive filter output [2], and is given by

$$ \mathbf{w}(m+1) = \gamma\,\mathbf{w}(m) + \mu\,\mathrm{IFFT}\!\left[\, \mathbf{X}^*(m)\,\mathbf{E}(m) \,\right]_+ \qquad (7) $$

where γ is the leakage factor.
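A minimal NumPy sketch of one block update of (7) follows (γ = 1 recovers the unconstrained update (6)); the overlap-save bookkeeping and the names are our assumptions, and the secondary-path filtering of the input is omitted for brevity:

```python
import numpy as np

def fdlms_block_update(w, x_old, x_new, e_new, mu, gamma=1.0):
    """One delayless FDLMS block update, eqs. (5)-(7). w: length-N
    time-domain weights; x_old, x_new: previous and current N-sample
    input blocks; e_new: current N-sample error block."""
    N = len(w)
    X = np.fft.fft(np.concatenate([x_old, x_new]))        # size-2N FFT
    E = np.fft.fft(np.concatenate([np.zeros(N), e_new]))
    grad = np.fft.ifft(np.conj(X) * E)
    return gamma * w + mu * np.real(grad[:N])             # causal part [.]_+
```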

The next two weight update equations, developed in [2], process the constraints in the frequency domain using an algorithm based on the method of steepest descent. The delayless form of the gain-constrained version is given by

$$ \mathbf{w}(m+1) = \mathbf{w}(m) + \mu\,\mathrm{IFFT}\!\left[\, \mathbf{X}^*(m)\,\mathbf{E}(m) - 4\alpha N \left( |\mathbf{W}(m)|^2 - C \right)_z \mathbf{W}(m) \,\right]_+ \qquad (8) $$

where the z subscript sets the result in the brackets to 0 if the value in the brackets is less than 0 (the constraint is satisfied), or to the value of the difference (the constraint is violated). The constraint is individually applied to each frequency bin. Here, α controls the “tightness” of the penalty: a larger α places a stiffer penalty on constraint violation at the expense of a larger steady-state convergence error.

The delayless form of the power-constrained algorithm is given by

$$ \mathbf{w}(m+1) = \mathbf{w}(m) + \mu\,\mathrm{IFFT}\!\left[\, \mathbf{X}^*(m)\,\mathbf{E}(m) - 4\alpha \left( P(m) - C \right)_z |\mathbf{X}(m)|^2\, \mathbf{W}(m) \,\right]_+ \qquad (9) $$

The output power P(m) is determined by the squared Euclidean norm of the filter output, which is required to be limited to a constraint value C, or equivalently

$$ P(m) = \left\| \mathbf{y}(m) \right\|^2 < C \qquad (10) $$

Note that in (10) there is only one constraint. When used for comparison purposes, we will denote (8) and (9) as constrained steepest descent (CSD) algorithms.
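For later comparison, the bracketed term of the gain-constrained CSD update (8) can be formed per bin as in the sketch below, where the [·]_z operation becomes an element-wise ramp of the violation (our naming, not code from [2]):

```python
import numpy as np

def csd_gain_penalty(W, X, E, C, alpha, N):
    """Bracketed term of eq. (8), per frequency bin (illustrative)."""
    violation = np.maximum(np.abs(W) ** 2 - C, 0)   # the [.]_z operation
    return np.conj(X) * E - 4 * alpha * N * violation * W
```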

3. New algorithm development

The new CMD algorithm will be developed using the principle of minimal disturbance, which states that the weight vector should be changed in a minimal manner from one iteration to the next [11]. A constraint is added for filter convergence, and a constraint is also added for either the filter gain (coefficient’s magnitude in each frequency bin), or the filter output power, depending on which we intend to limit. The method of Lagrange multipliers [11, 12] is then used to solve this constrained optimization problem [13].

3.1. Gain-constrained algorithm

At each block update m, the new algorithm will minimize the squared Euclidean norm of the frequency-domain weight change in each individual frequency bin k, where the weight change is given by

$$ \delta W_k(m+1) = W_k(m+1) - W_k(m) \qquad (11) $$

subject to the condition of a posteriori filter convergence in the frequency domain

$$ D_k(m) = S_k(m)\, W_k(m+1)\, X_k(m) \qquad (12) $$

In gain-constrained applications, the algorithm will additionally add a penalty based on the amount of magnitude violation above a maximum constraint value, requiring

$$ |W_k(m+1)| \le \sqrt{C_k} \qquad (13) $$

or equivalently

$$ |W_k(m+1)|^2 \le C_k \qquad (14) $$

The three requirements given by (11), (12), and (14) are combined into a single cost function, written as

$$ J(m+1) = |\delta W_k(m+1)|^2 + \mathrm{Re}\!\left\{ \lambda^* \left[ D_k(m) - S_k(m) W_k(m+1) X_k(m) \right] \right\} + \alpha_{\max,k} \left[ \left( |W_k(m+1)|^2 - C_k \right)^2 \right]_z \qquad (15) $$

where the Lagrange multiplier λ controls the convergence requirement of (12), and α_{max,k} controls the magnitude constraint in the individual frequency bin k; the parameter α_{max,k} controls the “tightness” of the penalty term, with a larger value placing more emphasis on meeting the constraint at the expense of increased convergence error [2]. The cost function (15) is differentiated with respect to each of the three variables and set to 0. For each frequency bin k,

$$ \frac{\partial J(m+1)}{\partial W_k(m+1)} = 2\left[ W_k(m+1) - W_k(m) \right]^* - \lambda^* S_k(m) X_k(m) + 2\alpha_{\max,k}\, W_k^*(m+1) \left[ |W_k(m+1)|^2 - C_k \right]_z = 0 \qquad (16) $$

$$ \frac{\partial J(m+1)}{\partial \lambda^*} = D_k(m) - S_k(m) W_k(m+1) X_k(m) = 0 \qquad (17) $$

$$ \frac{\partial J(m+1)}{\partial \alpha_{\max,k}} = \left[ \left( |W_k(m+1)|^2 - C_k \right)^2 \right]_z = 0 \qquad (18) $$

Rearranging (16) gives

$$ \left[ 1 + 2\alpha_{\max,k} \left( |W_k(m+1)|^2 - C_k \right)_z \right] W_k^*(m+1) = W_k^*(m) + \tfrac{1}{2}\, \lambda^* S_k(m) X_k(m) \qquad (19) $$

We now propose the following interpretation of the gain-constraint term. In steady state (after convergence), successive weight values are expected to be approximately equal for a small convergence step size. Therefore, as long as the constraint of (14) was satisfied in the previous iteration, the penalty is set to 0. However, if the magnitude of the filter weight exceeds the constraint value, then the penalty is scaled in proportion to the constraint violation (similar to the method in [14], which initiates the penalty at 90% of the constraint). We define

$$ \alpha_k = 2\alpha_{\max,k} \left[ |W_k(m)|^2 - C_k \right]_z \qquad (20) $$

where the z subscript term will force α_k to zero if the constraint of (14) is satisfied.
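In code, (20) reduces to an element-wise ramp on the squared coefficient magnitudes; a small sketch under our naming (the mapping to the leakage γ_k of (28), used later, is included for convenience):

```python
import numpy as np

def gain_leakage(W, C, alpha_max, mu):
    """Per-bin alpha_k of eq. (20) and leakage gamma_k of eq. (28)."""
    alpha_k = 2 * alpha_max * np.maximum(np.abs(W) ** 2 - C, 0)  # [.]_z
    return alpha_k / (mu * (1 + alpha_k))
```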

Substituting (20) into (19) at frequency bin k, conjugating both sides, and rearranging into a recursion results in

$$ W_k(m+1) = \frac{1}{1+\alpha_k} \left[ W_k(m) + \tfrac{1}{2}\, \lambda\, S_k^*(m) X_k^*(m) \right] \qquad (21) $$

Substituting (21) into (12) gives

$$ D_k(m) - S_k(m)\, \frac{1}{1+\alpha_k} \left[ W_k(m) + \tfrac{1}{2}\, \lambda\, S_k^*(m) X_k^*(m) \right] X_k(m) = 0 \qquad (22) $$

Rearranging (22) yields

$$ \left[ D_k(m) - S_k(m) W_k(m) X_k(m) \right] + \alpha_k D_k(m) - \tfrac{1}{2}\, \lambda\, |S_k(m)|^2 |X_k(m)|^2 = 0 \qquad (23) $$

The first term in brackets is the error at frequency bin k, E_k(m). Solving for λ results in

$$ \lambda = \frac{2 \left[ E_k(m) + \alpha_k D_k(m) \right]}{|S_k(m)|^2\, |X_k(m)|^2} \qquad (24) $$

Rearranging (21) into a recursion, using (24), and introducing a convergence step size parameter μ to control the rate of adaptation yields

$$ W_k(m+1) = \frac{1}{1+\alpha_k} \left[ W_k(m) + \mu\, \frac{E_k(m) + \alpha_k D_k(m)}{|S_k(m)|^2\, |X_k(m)|^2}\, S_k^*(m) X_k^*(m) \right] \qquad (25) $$

Noting that D_k(m) in (25) can be written as D_k(m) = E_k(m) + S_k(m)W_k(m)X_k(m) results in

$$ W_k(m+1) = \frac{1 + \mu\alpha_k}{1+\alpha_k}\, W_k(m) + \frac{\mu}{|S_k(m)|^2\, |X_k(m)|^2}\, S_k^*(m) X_k^*(m) E_k(m) \qquad (26) $$

For small μ, (26) can be approximated as

$$ W_k(m+1) = \frac{1}{1+\alpha_k}\, W_k(m) + \frac{\mu}{|S_k(m)|^2\, |X_k(m)|^2}\, S_k^*(m) X_k^*(m) E_k(m) \qquad (27) $$

Using the definitions

$$ \gamma_k = \frac{\alpha_k}{\mu \left( 1+\alpha_k \right)} \qquad (28) $$

and

$$ \mu_k = \frac{\mu}{|S_k(m)|^2\, |X_k(m)|^2} \qquad (29) $$

the weight update given by (27) can be written for each frequency bin as

$$ W_k(m+1) = (1 - \mu\gamma_k)\, W_k(m) + \mu_k\, S_k^*(m) X_k^*(m) E_k(m) \qquad (30) $$

Taking the IFFT of both sides and casting into a delayless structure results in the new CMD algorithm given by

$$ \mathbf{w}(m+1) = \mathbf{w}(m) + \mu\,\mathrm{IFFT}\!\left[ \frac{\mathbf{S}^*(m)\,\mathbf{X}^*(m)\,\mathbf{E}(m)}{|\mathbf{S}(m)|^2\, |\mathbf{X}(m)|^2} - \boldsymbol{\Gamma}(m)\,\mathbf{W}(m) \right]_+ \qquad (31) $$

where

$$ \boldsymbol{\Gamma}(m) = \mathrm{diag}\!\left[ \gamma_0(m),\, \gamma_1(m),\, \ldots,\, \gamma_{2N-1}(m) \right] \qquad (32) $$

is a diagonal matrix of variable leakage factors as determined by (28).

The |X_k(m)|² terms in (31) provide an estimate of the input power P_{x,k}(m) in each frequency bin k,

$$ P_{x,k}(m) = E\!\left[ |X_k(m)|^2 \right] \qquad (33) $$

which can be determined recursively by [15]

$$ P_{x,k}(m) = \beta\, P_{x,k}(m-1) + (1-\beta)\, X_k(m)\, X_k^*(m) \qquad (34) $$

where β is a smoothing constant slightly less than 1. Note that in equations such as (31), which use an estimated power value in the denominator, low power in a particular frequency bin may result in division by a very small number, potentially causing numerical instability. To guard against this, a small positive regularization parameter is added to the denominator [11].
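Assembling (20), (28), (31), and (34), one block update of the gain-constrained CMD algorithm might be sketched as follows. This is our illustrative NumPy rendering under stated assumptions (real signals, a known per-bin secondary-path response S, and the regularization eps discussed above), not the authors' reference code:

```python
import numpy as np

def cmd_block_update(w, x_old, x_new, e_new, S, P_x, C, alpha_max,
                     mu, beta=0.99, eps=1e-8):
    """Illustrative gain-constrained CMD block update, eqs. (20), (28),
    (31), (34). w: length-N weights; S: length-2N secondary-path
    response; P_x: length-2N input-power estimate (updated in place)."""
    N = len(w)
    X = np.fft.fft(np.concatenate([x_old, x_new]))            # eq. (5)
    E = np.fft.fft(np.concatenate([np.zeros(N), e_new]))
    W = np.fft.fft(np.concatenate([w, np.zeros(N)]))
    P_x[:] = beta * P_x + (1 - beta) * np.abs(X) ** 2         # eq. (34)
    alpha_k = 2 * alpha_max * np.maximum(np.abs(W) ** 2 - C, 0)  # eq. (20)
    gamma_k = alpha_k / (mu * (1 + alpha_k))                  # eq. (28)
    grad = np.fft.ifft(np.conj(S) * np.conj(X) * E
                       / (np.abs(S) ** 2 * P_x + eps)         # regularized
                       - gamma_k * W)
    return w + mu * np.real(grad[:N])                         # causal part
```

The running estimate P_x is carried across calls (one array per signal path), mirroring the recursion (34).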

The following observations can be made of the CMD algorithm given by (31), which is shown in Figure 3:

Figure 3. Block diagram of the CMD adaptive filter with frequency-domain processing.

  1. If the constraint of (14) is violated, the CMD algorithm will reduce the magnitude of the adaptive filter frequency response in proportion to the level of constraint violation.

  2. The CMD algorithm normalizes the weight update in a manner similar to the normalized LMS with leakage. The amount of leakage depends on the level of constraint violation.

  3. The CMD algorithm scales the weight update by the inverse of the secondary path frequency response, resulting in faster convergence in regions corresponding to valleys (low magnitude response) in the secondary path.

3.2. Power-constrained algorithm

In applications where the filter output power is to be limited, the gain coefficient constraint is replaced by an output power constraint. If total control effort is to be limited [2, 6], a single output power constraint can be expressed as

$$ P_y(m+1) \le C \qquad (35) $$

where

$$ P_y(m+1) = \frac{1}{N} \sum_{k=0}^{N-1} |W_k(m+1)|^2\, |X_k(m)|^2 \qquad (36) $$

The new power constrained cost function then becomes

$$ J(m+1) = |\delta W_k(m+1)|^2 + \mathrm{Re}\!\left\{ \lambda^* \left[ D_k(m) - S_k(m) W_k(m+1) X_k(m) \right] \right\} + \alpha_{\max,k} \left[ \left( P_y(m+1) - C \right)^2 \right]_z \qquad (37) $$

Following the development of the gain-constrained algorithm, this cost function is differentiated with respect to each of the three variables and set to 0. The resulting equations are

$$ \frac{\partial J(m+1)}{\partial W_k(m+1)} = 2\left[ W_k(m+1) - W_k(m) \right]^* - \lambda^* S_k(m) X_k(m) + 2\alpha_{\max,k}\, W_k^*(m+1)\, |X_k(m)|^2 \left[ P_y(m+1) - C \right]_z = 0 \qquad (38) $$

$$ \frac{\partial J(m+1)}{\partial \lambda^*} = D_k(m) - S_k(m) W_k(m+1) X_k(m) = 0 \qquad (39) $$

$$ \frac{\partial J(m+1)}{\partial \alpha_{\max,k}} = \left[ \left( P_y(m+1) - C \right)^2 \right]_z = 0 \qquad (40) $$

Rearranging (38) yields

$$ \left[ 1 + 2\alpha_{\max,k}\, |X_k(m)|^2 \left( P_y(m+1) - C \right)_z \right] W_k^*(m+1) = W_k^*(m) + \tfrac{1}{2}\, \lambda^* S_k(m) X_k(m) \qquad (41) $$

Using the same procedure previously described after (19), the term in (20) is replaced by

$$ \alpha_k = 2\alpha_{\max,k}\, |X_k(m)|^2 \left[ P_y(m) - C \right]_z \qquad (42) $$

Following a development similar to the gain-constrained case results in the CMD algorithm of (31), with the diagonal matrix of leakage factors (32) now determined by (42).

Better frequency performance can be achieved by estimating the power in each frequency bin, making the algorithm more selective in attenuating those frequencies in violation of the constraint. The output power in each frequency bin is determined by

$$ P_{y,k}(m+1) = |W_k(m+1)|^2\, P_{x,k}(m+1) \qquad (43) $$

Using C k as the power constraint, it is required that

$$ P_{y,k}(m+1) \le C_k \qquad (44) $$

The resulting cost function is given by

$$ J(m+1) = |\delta W_k(m+1)|^2 + \mathrm{Re}\!\left\{ \lambda^* \left[ D_k(m) - S_k(m) W_k(m+1) X_k(m) \right] \right\} + \alpha_{\max,k} \left[ \left( P_{y,k}(m+1) - C_k \right)^2 \right]_z \qquad (45) $$

Following the development of the gain-constrained algorithm, this cost function is differentiated with respect to each of the three variables and set to 0. The resulting equations are

$$ \frac{\partial J(m+1)}{\partial W_k(m+1)} = 2\left[ W_k(m+1) - W_k(m) \right]^* - \lambda^* S_k(m) X_k(m) + 2\alpha_{\max,k}\, W_k^*(m+1)\, |X_k(m)|^2 \left[ P_{y,k}(m+1) - C_k \right]_z = 0 \qquad (46) $$

$$ \frac{\partial J(m+1)}{\partial \lambda^*} = D_k(m) - S_k(m) W_k(m+1) X_k(m) = 0 \qquad (47) $$

$$ \frac{\partial J(m+1)}{\partial \alpha_{\max,k}} = \left[ \left( P_{y,k}(m+1) - C_k \right)^2 \right]_z = 0 \qquad (48) $$

Rearranging (46) yields

$$ \left[ 1 + 2\alpha_{\max,k}\, |X_k(m)|^2 \left( P_{y,k}(m+1) - C_k \right)_z \right] W_k^*(m+1) = W_k^*(m) + \tfrac{1}{2}\, \lambda^* S_k(m) X_k(m) \qquad (49) $$

Using the same procedure previously described after (19), the term in (20) is replaced by

$$ \alpha_k = 2\alpha_{\max,k}\, |X_k(m)|^2 \left[ P_{y,k}(m) - C_k \right]_z \qquad (50) $$

Following a development similar to the gain-constrained case, and using a new diagonal matrix of leakage factors (32) as determined by (50), results in the CMD algorithm, repeated below:

$$ \mathbf{w}(m+1) = \mathbf{w}(m) + \mu\,\mathrm{IFFT}\!\left[ \frac{\mathbf{S}^*(m)\,\mathbf{X}^*(m)\,\mathbf{E}(m)}{|\mathbf{S}(m)|^2\, |\mathbf{X}(m)|^2} - \boldsymbol{\Gamma}(m)\,\mathbf{W}(m) \right]_+ \qquad (51) $$

where

$$ \boldsymbol{\Gamma}(m) = \mathrm{diag}\!\left[ \gamma_0(m),\, \gamma_1(m),\, \ldots,\, \gamma_{2N-1}(m) \right] \qquad (52) $$
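For the bin-power-constrained case, only the leakage computation changes relative to the gain-constrained sketch given earlier; per (43), (50), and (28), a sketch under our naming:

```python
import numpy as np

def power_leakage(W, P_x, C_k, alpha_max, mu):
    """Per-bin leakage for bin-power constraints, eqs. (43), (50), (28)."""
    P_y = np.abs(W) ** 2 * P_x                                 # eq. (43)
    alpha_k = 2 * alpha_max * P_x * np.maximum(P_y - C_k, 0)   # eq. (50)
    return alpha_k / (mu * (1 + alpha_k))                      # eq. (28)
```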

4. Convergence analysis

We assume that all signals are white, zero-mean, Gaussian, and wide-sense stationary, and we employ the independence assumption [7] under a steady-state condition, where the constraint violation is constant and the transform-domain weights are mutually uncorrelated (which occurs as the filter size N grows large [16]). We will also use a normalized input power of unity in (34), which allows the analysis to apply to both the gain-constrained and power-constrained cases. Uncorrelated white measurement noise with variance σ_n² will be denoted by η_k.

4.1. Mean value

The weight update equation (30) can be written as

$$ W_k(m+1) = (1 - \mu\gamma_k)\, W_k(m) + \mu_k \left[ D_k(m) - S_k(m) W_k(m) X_k(m) \right] S_k^*(m) X_k^*(m) \qquad (53) $$

or equivalently

$$ W_k(m+1) = \left[ 1 - \mu(\gamma_k + 1) \right] W_k(m) + \mu\, W_{k,\mathrm{opt}} \qquad (54) $$

where W_{k,opt} denotes the optimal Wiener solution [given by S_k^{-1}(m)D_k(m)]. Taking expectations of both sides, using the assumptions, and noting that the input power is normalized per (29) results in

$$ E\!\left[ W_k(m+1) \right] = \left[ 1 - \mu(\gamma_k + 1) \right] E\!\left[ W_k(m) \right] + \mu\, W_{k,\mathrm{opt}} \qquad (55) $$

By induction, this recursion can be written as

$$ E\!\left[ W_k(m) \right] = \left[ 1 - \mu(\gamma_k + 1) \right]^m E\!\left[ W_k(0) \right] + \mu\, W_{k,\mathrm{opt}} \sum_{i=0}^{m-1} \left[ 1 - \mu(\gamma_k + 1) \right]^{m-1-i} \qquad (56) $$

Convergence requirements on μ are derived below. When these conditions are satisfied, the result is

$$ \lim_{m \to \infty} E\!\left[ W_k(m) \right] = \mu\, W_{k,\mathrm{opt}} \lim_{m \to \infty} \sum_{i=0}^{m-1} \left[ 1 - \mu(\gamma_k + 1) \right]^{m-1-i} \qquad (57) $$

which converges in the limit to the steady-state solution W_{k,ss},

$$ W_{k,\mathrm{ss}} = \frac{W_{k,\mathrm{opt}}}{1 + \gamma_k} \qquad (58) $$
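The steady-state bias of (58) is easy to confirm numerically by iterating the scalar mean recursion (55); a short sketch with arbitrary illustrative values:

```python
import numpy as np

mu, gamma_k, W_opt = 0.05, 0.5, 1.0 + 2.0j    # illustrative values
W = 0.0
for _ in range(5000):
    W = (1 - mu * (gamma_k + 1)) * W + mu * W_opt   # eq. (55)
print(np.allclose(W, W_opt / (1 + gamma_k)))        # eq. (58): prints True
```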

4.2. Convergence in the mean

The deviation from the steady-state solution in bin k is defined by a weight error [17] given by

$$ V_k(m) = W_k(m) - W_{k,\mathrm{ss}} \qquad (59) $$

allowing the CMD algorithm to be expressed as

$$ V_k(m+1) = \left[ 1 - \mu\gamma_k - \mu_k\, X_k^*(m) X_k(m) \right] V_k(m) + \mu_k\, \eta_k X_k(m) - \mu\gamma_k\, W_{k,\mathrm{ss}} \qquad (60) $$

Taking expectations of both sides results in

$$ E\!\left[ V_k(m+1) \right] = \left[ 1 - \mu(\gamma_k + 1) \right] E\!\left[ V_k(m) \right] - \mu\gamma_k\, W_{k,\mathrm{ss}} \qquad (61) $$

By induction, this recursion can be written as

$$ E\!\left[ V_k(m) \right] = \left[ 1 - \mu(\gamma_k + 1) \right]^m E\!\left[ V_k(0) \right] - \mu\gamma_k\, W_{k,\mathrm{ss}} \sum_{i=0}^{m-1} \left[ 1 - \mu(\gamma_k + 1) \right]^{m-1-i} \qquad (62) $$

For this recursion to converge, the geometric term must decay,

$$ \left| 1 - \mu(\gamma_k + 1) \right| < 1 \qquad (63) $$

resulting in

$$ \mu < \frac{2}{1 + \gamma_k} \qquad (64) $$

with the upper bound on μ occurring for maximum constraint violation, given by

$$ \mu < \frac{2}{1 + \gamma_{k,\max}} \qquad (65) $$

4.3. Convergence in the mean square

Both sides of (60) are first multiplied by their respective conjugates and rearranged; taking expectations results in

$$ \begin{aligned} E\!\left[ V_k(m+1) V_k^*(m+1) \right] ={}& E\!\left[ \left( 1 - \mu\gamma_k - \mu_k |X_k(m)|^2 \right)^2 V_k(m) V_k^*(m) \right] + \frac{\mu^2 \sigma_n^2}{|X_k(m)|^2} + \mu^2 \gamma_k^2\, W_{k,\mathrm{ss}} W_{k,\mathrm{ss}}^* \\ & - \mu\gamma_k\, E\!\left[ \left( 1 - \mu\gamma_k - \mu_k |X_k(m)|^2 \right) V_k(m) \right] W_{k,\mathrm{ss}}^* - \mu\gamma_k\, E\!\left[ \left( 1 - \mu\gamma_k - \mu_k |X_k(m)|^2 \right) V_k^*(m) \right] W_{k,\mathrm{ss}} \end{aligned} \qquad (66) $$

Rearranging and employing the assumptions [18] gives

$$ \begin{aligned} E\!\left[ V_k(m+1) V_k^*(m+1) \right] ={}& \left[ 1 - 2\mu(\gamma_k + 1) + \mu^2 \left( \gamma_k^2 + 2\gamma_k + 1 \right) \right] E\!\left[ V_k(m) V_k^*(m) \right] \\ & + \frac{\mu^2 \sigma_n^2}{|X_k(m)|^2} + \mu^2 \gamma_k^2\, |W_{k,\mathrm{ss}}(m)|^2 - 2\mu\gamma_k \left[ 1 - \mu(\gamma_k + 1) \right] W_{k,\mathrm{ss}}^*\, E\!\left[ V_k(m) \right] \end{aligned} \qquad (67) $$

Since the weight-error variance update depends on the mean weight error E[V_k(m)], a state-space model can be defined as

$$ \mathbf{Z}_k(m) = \begin{bmatrix} E\!\left[ V_k(m) V_k^*(m) \right] \\ E\!\left[ V_k(m) \right] \end{bmatrix} \qquad (68) $$

and the update defined as the real component of

$$ \mathbf{Z}_k(m+1) = \mathbf{A}\, \mathbf{Z}_k(m) + \mathbf{B} \qquad (69) $$

with

$$ \mathbf{A} = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix} \qquad (70) $$

and

$$ \mathbf{B} = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} \qquad (71) $$

where

$$ \begin{aligned} A_{11} &= 1 - 2\mu(\gamma_k + 1) + \mu^2 \left( \gamma_k^2 + 2\gamma_k + 1 \right) \\ A_{12} &= -2\mu\gamma_k \left[ 1 - \mu(\gamma_k + 1) \right] W_{k,\mathrm{ss}}^* \\ A_{22} &= 1 - \mu(\gamma_k + 1) \\ B_1 &= \mu^2 \left[ \frac{\sigma_n^2}{|X_k(m)|^2} + \gamma_k^2\, |W_{k,\mathrm{ss}}(m)|^2 \right] \\ B_2 &= -\mu\gamma_k\, W_{k,\mathrm{ss}} \end{aligned} \qquad (72) $$

For stability, it is required that the eigenvalues in the state transition matrix A have a magnitude less than 1 [19], requiring matrix entry A 11 in (70) to be bounded to magnitude less than 1, resulting in

$$ 1 - 2\mu(\gamma_k + 1) + \mu^2 \left( \gamma_k^2 + 2\gamma_k + 1 \right) < 1 \qquad (73) $$

or

$$ \mu < \frac{2}{1 + \gamma_k} \qquad (74) $$

with the upper bound on μ occurring for maximum constraint violation, given by

$$ \mu < \frac{2}{1 + \gamma_{k,\max}} \qquad (75) $$
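The bound (75) can be checked numerically by forming the state-transition matrix A of (70) with the entries of (72) and verifying that its eigenvalues lie inside the unit circle; a sketch with illustrative values (since A is upper triangular, its eigenvalues are simply A_11 and A_22):

```python
import numpy as np

mu, gamma_k, W_ss = 0.1, 0.8, 0.5 + 0.5j      # illustrative values
A11 = 1 - 2 * mu * (gamma_k + 1) + mu**2 * (gamma_k**2 + 2 * gamma_k + 1)
A12 = -2 * mu * gamma_k * (1 - mu * (gamma_k + 1)) * np.conj(W_ss)
A22 = 1 - mu * (gamma_k + 1)
A = np.array([[A11, A12], [0, A22]])          # eq. (70)
assert mu < 2 / (1 + gamma_k)                  # eq. (74) satisfied
print(np.all(np.abs(np.linalg.eigvals(A)) < 1))   # stable: prints True
```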

5. Simulations

In the simulations, the experimental data from [3] are used for the plant, modeled by a 512-term all-zero filter centered at N/2. The output rescaling algorithm (2) is applied in the frequency domain to determine the steady-state adaptive filter final coefficients. We demonstrate the improved convergence performance of the CMD algorithm as compared to the CSD algorithm and the leaky LMS in both gain-constrained and power-constrained applications. The values of the constraint terms C, α_{max,k}, and C_k are held constant in the simulations, but could be shaped over frequency for specific applications. External uncorrelated Gaussian white noise with a variance of 0.01 is added for the convergence comparisons, and an average of 100 runs is plotted. The simulations assume prior knowledge of the secondary path transfer function; methods for on-line and off-line secondary path identification are presented in, e.g., [20, 21].

5.1. Gain-constrained algorithm

Using a unity-gain secondary path, a 3-dB coefficient gain constraint is imposed. Figure 4 shows the plant frequency response and the response of the new CMD algorithm, illustrating the clipping effect of the algorithm.

Figure 4. Frequency response of gain-constrained CMD algorithm.

Using the experimental data from [3] for the secondary path, the algorithms should converge to the response shown in Figure 5, which shows the CMD algorithm response, the plant frequency response, and the secondary path frequency response. The adaptive filter in this case must simultaneously identify H(z) and equalize S(z), while still maintaining the gain constraints. Figure 6 displays the convergence comparison of the three algorithms for the system in Figure 5 with a white noise input. The CMD algorithm has the fastest convergence performance. The CSD algorithm began converging in a similar manner, but was not able to fully achieve the relatively high 20-dB gain required at the lowest frequencies in Figure 5. However, other simulations without deep secondary path nulls showed that the two algorithms converge to similar final weight values, with the CMD having a faster convergence rate. The leaky LMS attenuates all frequencies (not just those in violation of the constraint) and has the poorest convergence performance. (The leaky LMS curve appears smoother than the other two, but this is due to the logarithmic scale of the y-axis in the plots.)

Figure 5. Frequency response of gain-constrained CMD algorithm with secondary path.

Figure 6. Convergence comparison, gain-constrained condition.

Figure 7 compares the convergence of the three algorithms for a colored noise input, created by filtering the input with a first-order AR(1) lowpass process with coefficients [1, −0.95]. The CSD algorithm requires a significant reduction of μ in (8) to maintain stability, resulting in a slow response. However, the increased energy in the lower frequency regions due to the lowpass input process improved the misadjustment for this case. The leaky LMS attenuates all frequencies (not just those in violation of the constraint) and has the poorest convergence performance and the highest excess misadjustment.
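The colored input used here can be generated by driving the AR(1) filter 1/(1 − 0.95 z⁻¹) with white noise; a sketch, assuming SciPy is available (the sample count and seed are arbitrary):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
white = rng.standard_normal(65536)
colored = lfilter([1.0], [1.0, -0.95], white)   # AR(1) lowpass process
```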

Figure 7. Convergence comparison, gain-constrained condition, AR(1) colored noise input.

5.2. Power-constrained algorithm

The frequency response and convergence of the three algorithms are compared in Figures 8 and 9, respectively, using a single output power constraint of 25% of the unconstrained value (−6 dB). The CMD algorithm has the fastest convergence performance and maintains a 6-dB power reduction over frequency. The CSD displays similar performance, but again was not able to fully achieve the relatively high 20-dB gain required at the lowest frequencies. The leaky LMS has the poorest convergence performance, primarily due to its inability to track the lowest frequencies. Both the CMD and CSD algorithms allow the power constraint to be set explicitly, while the leaky LMS requires a trial-and-error approach to determine its parameters.

Figure 8. Frequency response comparison, power-constrained condition.

Figure 9. Convergence comparison, power-constrained condition.

The CMD algorithm frequency response for the individual bin-constrained case using the constraint of (44) is shown in Figure 10 for a 3-dB power limit with a wideband white noise input. Comparing this to Figure 8 illustrates how the new CMD algorithm reduces the output at the frequencies of power-constraint violation, while minimizing the effect at other frequencies.

Figure 10. Frequency response, bin-power-constrained condition.

6. Conclusion

A new algorithm, the CMD LMS, was presented for gain-constrained and power-constrained adaptive filter applications. Analysis results were developed for the stability bounds in the mean and mean-square sense. The CMD algorithm was compared to the algorithm developed in [2] and to the leaky LMS for filtered-X ANC applications. The new CMD algorithm provides faster convergence and improved frequency response performance, especially in colored noise environments. Additionally, the new CMD algorithm can handle multiple constraints in both gain-constrained and power-constrained applications.

References

  1. Nelson PA, Elliott SJ: Active Control of Sound. Academic Press, London; 1992.

  2. Rafaely B, Elliott SJ: A computationally efficient frequency-domain LMS algorithm with constraints on the adaptive filter. IEEE Trans. Signal Process. 2000, 48(6):1649-1655. 10.1109/78.845922

  3. Kuo SM, Morgan DR: Active Noise Control Systems: Algorithms and DSP Implementations. Wiley, New York; 1996.

  4. Taringoo F, Poshtan J, Kahaei MH: Analysis of effort constraint algorithm in active noise control systems. EURASIP J. Appl. Signal Process. 2006, 2006:1-9.

  5. Qiu X, Hansen CH: A study of time-domain FXLMS algorithms with control output constraint. J. Acoust. Soc. Am. 2001, 109(6):2815-2823.

  6. Darlington P: Performance surfaces of minimum effort estimators and controllers. IEEE Trans. Signal Process. 1995, 43(2):536-539. 10.1109/78.348136

  7. Widrow B, Stearns SD: Adaptive Signal Processing. Prentice-Hall, Upper Saddle River, NJ; 1985.

  8. Nowak MP, Van Veen BD: A constrained transform-domain adaptive IIR filter structure for active noise control. IEEE Trans. Speech Audio Process. 1997, 5(5):334-347.

  9. Morgan DR, Thi JC: A delayless subband adaptive filter architecture. IEEE Trans. Signal Process. 1995, 43(8):1819-1830. 10.1109/78.403341

  10. Shynk JJ: Frequency-domain and multirate adaptive filtering. IEEE Signal Process. Mag. 1992, 9(1):14-37.

  11. Haykin S: Adaptive Filter Theory. Prentice-Hall, Upper Saddle River, NJ; 2002.

  12. Fletcher R: Practical Methods of Optimization. Wiley, New York; 1987.

  13. Kozacky WJ, Ogunfunmi T: Convergence analysis of a frequency-domain adaptive filter with constraints on the output weights. In Proceedings of the Asilomar Conference on Signals, Systems, and Computers. Pacific Grove, USA; 2009:1350-1355.

  14. Elliott SJ, Baek KH: Effort constraints in adaptive feedforward control. IEEE Signal Process. Lett. 1996, 3(1):7-9.

  15. Sommen PCW, Van Gerwen PJ, Kotmans HJ, Janssen JEM: Convergence analysis of a frequency-domain adaptive filter with exponential power averaging and generalized window function. IEEE Trans. Circuits Syst. 1987, 34(7):788-798. 10.1109/TCS.1987.1086205

  16. Farhang-Boroujeny B, Chan KS: Analysis of the frequency-domain block LMS algorithm. IEEE Trans. Signal Process. 2000, 48(8):2332-2342. 10.1109/78.852014

  17. Mayyas K, Aboulnasr T: Leaky LMS algorithm: MSE analysis for Gaussian data. IEEE Trans. Signal Process. 1997, 45(4):927-934. 10.1109/78.564181

  18. Douglas SC: Performance comparison of two implementations of the leaky LMS adaptive filter. IEEE Trans. Signal Process. 1997, 45(8):2125-2129. 10.1109/78.611231

  19. Mayyas K, Aboulnasr T: Leaky LMS: a detailed analysis. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS). Seattle, USA; 1995, 2:1255-1258.

  20. Kuo SM, Vijayan D: A secondary path modeling technique for active noise control systems. IEEE Trans. Speech Audio Process. 1997, 5(4):374-377. 10.1109/89.593319

  21. Akhtar MT, Abe M, Kawamata M: On active noise control systems with online acoustic feedback path modeling. IEEE Trans. Audio Speech Lang. Process. 2007, 15(2):593-600.
Author information

Correspondence to Tokunbo Ogunfunmi.

Competing interests

The authors declare that they have no competing interests.

Authors’ contribution

WJK and TO derived the equations, carried out and reviewed the simulations, and drafted the manuscript. Both authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Kozacky, W.J., Ogunfunmi, T. An active noise control algorithm with gain and power constraints on the adaptive filter. EURASIP J. Adv. Signal Process. 2013, 17 (2013). https://doi.org/10.1186/1687-6180-2013-17