EURASIP Journal on Applied Signal Processing 2005:14, 2187–2195
© 2005 Hindawi Publishing Corporation

Neural Network Combination by Fuzzy Integral for Robust Change Detection in Remotely Sensed Imagery

Combining multiple neural networks has been used to improve decision accuracy in many application fields, including pattern recognition and classification. In this paper, we investigate the potential of this approach for land cover change detection. In a first step, we perform many experiments in order to find the optimal individual networks in terms of architecture and training rule. In a second step, different neural network change detectors are combined using a method based on the notion of the fuzzy integral. This method combines objective evidences, in the form of network outputs, with subjective measures of their performance. Various forms of the fuzzy integral, namely the Choquet integral, the Sugeno integral, and two extensions of the Sugeno integral with ordered weighted averaging operators, are implemented. Experimental analysis using error matrices and Kappa analysis showed that the fuzzy integral outperforms the individual networks and constitutes an appropriate strategy to increase the accuracy of change detection.


INTRODUCTION
Analysis of multitemporal remote sensing images is used for multiple purposes such as environment monitoring and wide-area surveillance. These applications involve the identification of changes in land cover and land use practices. Hence, a pair of spatially registered images acquired over the same ground area at different times is analyzed to identify areas that have changed. Commonly, the comparison of independently produced classifications of the data is used, since it provides complete knowledge of the change [1]. However, there are major problems associated with this technique. On one hand, its accuracy is critically dependent upon the two individual classifications. On the other hand, it does not allow the detection of subtle changes within a land cover class [2]. Recently, an alternative approach based on simultaneous classification of multitemporal data has begun to be used to overcome these drawbacks and allow an automatic extraction of different kinds of change [3, 4]. To develop such a change detector, one can adopt statistical classifiers that are widely used in remote sensing, such as the maximum likelihood classifier. However, these algorithms are based on hard and commonly untenable assumptions about the data. Therefore, nonparametric classifiers such as neural networks and fuzzy classifiers are increasingly being used. Presently, we focus our attention on artificial neural networks (ANNs), which have been successfully applied in a wide range of applications including classification and change detection in remotely sensed data [5, 6, 7]. Theoretically speaking, ANNs are able to achieve an accurate result with high generalization capacity. Nevertheless, in practice, their use poses several problems. In addition to the large variety of training algorithms, we are faced with a vast selection of possible network architectures and setup parameters. A bad choice of these factors adversely affects the generalization capacity.
Moreover, different neural networks may a priori perform differently from one land cover class to another. In this paper, we propose the combination (or fusion) of several neural networks to achieve the best possible change detection performance. In fact, several researchers have attempted to use multiple neural networks with an appropriate collective decision strategy [8]. Among the available techniques, the fuzzy integral has enjoyed strong success in several applications of land cover classification, handwriting recognition, and image sequence analysis [8, 9, 10, 11, 12]. This method combines objective evidences in the form of network outputs according to expectations about their relevance. These expectations are estimated by using a fuzzy measure. First, the experimental design includes architecture and training rule selection. Next, we carry out the combination of three neural networks using various forms of the fuzzy integral. Specifically, we use the Sugeno integral, the Choquet integral, and the Sugeno integral extended by two families of ordered weighted averaging operators (OWA-OR and OWA-AND). The rest of this paper is arranged as follows. Section 2 reviews the neural network change detector as well as its training rules. It also presents the combination via the fuzzy integral. Section 3 summarises experimental results, including architecture and training algorithm selection for the neural networks and the performance comparison of the combination rules. Finally, Section 4 discusses and gives the main conclusions of the paper.

Neural-network-based change detector
There are two approaches to change detection [2]. The first approach uses comparative analysis of independently produced classifications, in which the pixel labels of two individual classifications are compared to detect changes. This approach gives complete information about the land cover change, but errors in the two classifications appear in the final change map as missed or spurious changes. The second approach is based on simultaneous classification of multitemporal data. It overcomes some limitations of the first approach since the data are handled by the same classifier. Hence, land cover change classes are selected at the classifier input. For instance, if we are interested in assessing changes from non-urban to urban, we select pixels which were not in the urban class at t₁ and are in this class at t₂. Similarly, if we want to extract a no-change class such as urban areas, we select pixels that belong to this class on both dates.
In this paper, we adopt the simultaneous analysis approach using artificial neural network classifiers. Recall that an artificial neural network is considered as a mapping device between an input set and an output set. Internally, it is constructed from processing units interconnected by weighted channels according to some architecture [5]. Thus, the neural network change detector has the following structure.
(i) Network input: it receives input data extracted from two or more multitemporal images of the study area. The spectral channels of these images are spatially aligned and concatenated to form the input vector.
(ii) Network output: the network output can be encoded in several ways. In our case, we use one output node per land cover category, which is either a change or a no-change class.
(iii) Network architecture: the number of hidden layers and their size are determined by the user. In general, for a complex classification task like change detection, a network with two hidden layers achieves the best result in terms of the squared error at convergence and the generalization ability [6].
For instance, Figure 1 shows the architecture of a two-hidden-layer network for change detection. Recall that supervised learning of this network aims to minimize the cost over all training examples of the input-output relation by iteratively modifying the synaptic weights, that is, minimizing the mean squared error E between the actual and desired outputs of the network, respectively Y and Yᵈ:

E = (1/M) ∑_{m=1}^{M} ‖Y_m − Y_m^d‖²,

where M is the number of training patterns. Thereby, the synaptic weights are updated at iteration (t + 1) by

w(t + 1) = w(t) + Δw(t + 1),

where Δw(t + 1) corresponds to the weight change, which is computed for the backpropagation algorithm by

Δw(t + 1) = −η ∂E/∂w + α Δw(t),

where η is the step size and α is the momentum. However, the main problem of backpropagation is that many iterations are required to train even a small network. Hence, an alternative algorithm based on Kalman filtering has been proposed to accelerate the training stage of the neural network. For this training rule, the weight change is computed by means of the following equation:

Δw_i(t + 1) = μ_i k_i e_i,

where e_i is the error signal of layer i computed at each node, μ_i is the step size, and k_i is the Kalman gain (more details about this training rule are given in [13]).
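The backpropagation update with momentum, Δw(t + 1) = −η ∂E/∂w + α Δw(t), can be sketched on a toy problem. The snippet below applies it to a one-parameter least-squares fit; the data, step size, and momentum values are made-up assumptions for illustration, not taken from the paper's experiments.

```python
# Toy least-squares problem: E(w) = (1/M) * sum_m (w * x_m - y_m)^2.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

def grad(w):
    """dE/dw for the mean squared error above."""
    m = len(xs)
    return (2.0 / m) * sum((w * x - y) * x for x, y in zip(xs, ys))

eta, alpha = 0.05, 0.9  # step size (eta) and momentum (alpha)
w, dw = 0.0, 0.0
for _ in range(200):
    dw = -eta * grad(w) + alpha * dw  # backpropagation-style weight change
    w += dw                           # w(t+1) = w(t) + delta_w(t+1)
# w converges to the least-squares optimum 17/7 ≈ 2.4286
```

The momentum term α Δw(t) reuses the previous weight change, which is what lets the update traverse flat regions of E faster than plain gradient descent.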

Fuzzy measures and fuzzy integrals
In this section, we give the basic notions of the fuzzy measure and fuzzy integral.
(A) Fuzzy measure

Let Z be a finite set of elements. A set function g : 2^Z → [0, 1] is called a fuzzy measure if [7, 8, 9]

(1) g(∅) = 0 and g(Z) = 1;
(2) g(A) ≤ g(B) whenever A ⊆ B ⊆ Z.

The fuzzy measure does not follow the addition rule, that is, in general g(A ∪ B) ≠ g(A) + g(B) for disjoint A and B. However, while combining multiple sources, one must set the fuzzy measure of groups of sources. Therefore, Sugeno proposed the g_λ fuzzy measure, which satisfies the following property [7, 8]:

g(A ∪ B) = g(A) + g(B) + λ g(A) g(B),   A ∩ B = ∅, λ > −1.

(B) g_λ fuzzy measure construction

Let Z = {z₁, …, z_n} be the set of available change detectors. With each change detector z_i to be combined, we associate a fuzzy measure g_k(z_i) indicating its performance in class k. For a given pixel, let h_k(z_i) be the objective evidence of change detector z_i for class k. The set of change detectors is then rearranged such that the following relation holds: h_k(z₁) ≥ ··· ≥ h_k(z_n) ≥ 0. We obtain an ascending sequence of nested subsets of change detectors A_i = {z₁, …, z_i}, whose fuzzy measures are constructed as

g_k(A₁) = g_k(z₁),
g_k(A_i) = g_k(z_i) + g_k(A_{i−1}) + λ g_k(z_i) g_k(A_{i−1}),   1 < i ≤ n.   (7)

For each class k, λ is determined by solving the equation of degree (n − 1)

λ + 1 = ∏_{i=1}^{n} (1 + λ g_k(z_i)).

Notice that λ ∈ ]−1, +∞[ with λ ≠ 0. It is important to stress that (7) allows us to construct fuzzy measures providing both the weight of a single change detector and the weight of a subset of change detectors. However, there is no fixed rule for attributing the g_k values. In fact, they can be subjectively assigned by an expert or computed from the training data [8]. In this paper, g_k is expressed by the fuzzy accuracy per land cover class computed using a validation set.
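As a sketch of how λ and the nested measures in (7) can be obtained in practice, the snippet below solves the degree-(n − 1) equation numerically by bisection and then applies the recursion; the density values are hypothetical, not those of the experiments.

```python
def solve_lambda(densities):
    """Solve lambda + 1 = prod_i (1 + lambda * g_i) by bisection.

    The root lies in ]-1, 0[ when sum(g_i) > 1 and in ]0, +inf[ when
    sum(g_i) < 1 (lambda = 0 is excluded).
    """
    def f(lam):
        p = 1.0
        for gi in densities:
            p *= 1.0 + lam * gi
        return p - (lam + 1.0)

    lo, hi = (-1.0 + 1e-12, -1e-12) if sum(densities) > 1.0 else (1e-12, 1e6)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

def nested_measures(densities, lam):
    """g(A_1) = g_1; g(A_i) = g_i + g(A_{i-1}) + lam * g_i * g(A_{i-1})."""
    out = [densities[0]]
    for gi in densities[1:]:
        out.append(gi + out[-1] + lam * gi * out[-1])
    return out

g = [0.4, 0.35, 0.5]                # hypothetical per-detector worths for one class
lam = solve_lambda(g)               # negative here, since the densities sum to > 1
measures = nested_measures(g, lam)  # measures[-1] == g(Z) == 1 by construction
```

By construction, 1 + λ g(A_i) = ∏_{j≤i}(1 + λ g_j), so the last nested measure always equals 1 when λ solves the polynomial equation, which is a useful sanity check.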
(C) Sugeno integral

The Sugeno integral I_S of a function h : Z → [0, 1] with respect to a fuzzy measure g over Z is computed by

I_S = max_{i=1,…,n} [ min ( h_k(z_i), g_k(A_i) ) ],   (9)

where the evidences are sorted in descending order as above. This integral has been extended by using two special families of ordered weighted averaging (OWA) operators, namely the OWA-AND and OWA-OR operators [9].

(D) S-OWA-AND integral
With the OWA-AND operator, the objective evidences h_k(z_i) are transformed by applying the S-OWA-AND operator, which blends the smallest evidence with the average evidence:

F_AND(a₁, …, a_n) = β min_i a_i + ((1 − β)/n) ∑_{i=1}^{n} a_i,   β ∈ [0, 1].

The transformed evidences are then utilized in (9) to compute the new form of the Sugeno integral, termed I_AND.

(E) S-OWA-OR integral
The OWA-OR operator acts on the values of the fuzzy integral computed for each class before the final aggregated decision is evaluated. It blends the largest value with the average value:

F_OR(a₁, …, a_n) = α max_i a_i + ((1 − α)/n) ∑_{i=1}^{n} a_i,   α ∈ [0, 1],

where the a_i are the class-wise integral values and E denotes the set of classes over which the final decision is taken. Parameters α and β lie in the unit interval and can yield somewhat different results when extending the fuzzy integral by OWA operators [9].
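For concreteness, the two S-OWA operators can be written as convex blends of max/min with the arithmetic mean. This sketch uses the α = 0.2 and β = 0.5 settings reported later in the experiments; the input scores are invented.

```python
def s_owa_or(values, alpha=0.2):
    """OR-like blend: alpha * max + (1 - alpha) * mean."""
    return alpha * max(values) + (1.0 - alpha) * sum(values) / len(values)

def s_owa_and(values, beta=0.5):
    """AND-like blend: beta * min + (1 - beta) * mean."""
    return beta * min(values) + (1.0 - beta) * sum(values) / len(values)

scores = [0.9, 0.6, 0.3]        # hypothetical per-class fuzzy integral values
# s_owa_or(scores)  -> 0.66: leans towards the best-supported value
# s_owa_and(scores) -> 0.45: penalized by the least-supported value
```

Both outputs stay between the minimum and maximum of the inputs; the parameters only decide how strongly the extreme value dominates the mean.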

(F) Discrete Choquet integral
The discrete Choquet integral of a function h : Z → ℝ⁺ with respect to g is defined as [10]

I_C = ∑_{i=1}^{n} [ h_k(z_i) − h_k(z_{i−1}) ] g_k(A_i),

where the indices i have been permuted so that 0 ≤ h_k(z₁) ≤ ··· ≤ h_k(z_n), A_i = {z_i, …, z_n}, and h_k(z₀) = 0.
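The Sugeno and Choquet integrals reduce to a few lines once the evidences are sorted and the nested measures g(A_i) are available. This is an illustrative sketch (the Choquet form is written in the equivalent descending-order fashion), with invented numbers.

```python
def sugeno_integral(h_desc, g_nested):
    """max over i of min(h(z_i), g(A_i)); h_desc is sorted in descending
    order and g_nested[i] = g(A_{i+1}), the measure of the i+1 best detectors."""
    return max(min(h, g) for h, g in zip(h_desc, g_nested))

def choquet_integral(h_desc, g_nested):
    """Descending-order Choquet form: sum_i (h_i - h_{i+1}) * g(A_i), h_{n+1} = 0."""
    total = 0.0
    for i, g in enumerate(g_nested):
        nxt = h_desc[i + 1] if i + 1 < len(h_desc) else 0.0
        total += (h_desc[i] - nxt) * g
    return total

h = [0.9, 0.6, 0.3]       # evidences, already sorted in descending order
gA = [0.4, 0.7, 1.0]      # measures of the nested subsets, g(A_n) = g(Z) = 1
# sugeno_integral(h, gA)  -> 0.6
# choquet_integral(h, gA) -> 0.63
```

The Sugeno integral is an order statistic (a max of mins), while the Choquet integral is a weighted sum, which is why the two can rank classes slightly differently on the same evidences.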

Algorithmic implementation
The basic idea of multiple neural networks is to develop n independently trained networks with relevant features and to combine their outputs to produce a consensus decision [9], as shown in Figure 2. The fuzzy integral seeks the maximal grade of agreement between the objective evidences h_k(z_i) and their performances represented by the fuzzy measures g_k(z_i). In Algorithm 1, we adapt the pseudocode of combination given in [8] to the work presented here. We define the performance of a neural network z_i in class k by

g_k(z_i) = R_k / (R_k + R_c + R_o + R_r),   (13)

where R_k denotes the recognition rate, R_c the commission rate, R_o the omission rate, and R_r the reject rate.
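Putting the pieces together, a hedged end-to-end sketch of the combination step (in the spirit of Algorithm 1, with the Sugeno integral as combiner) might look as follows; the per-class densities and network outputs are hypothetical.

```python
def sugeno_lambda(densities):
    """Solve lambda + 1 = prod_i (1 + lambda * g_i) by bisection."""
    def f(lam):
        p = 1.0
        for gi in densities:
            p *= 1.0 + lam * gi
        return p - (lam + 1.0)
    lo, hi = (-1.0 + 1e-12, -1e-12) if sum(densities) > 1.0 else (1e-12, 1e6)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0.0 else (mid, hi)
    return (lo + hi) / 2.0

def fuzzy_integral_decision(evidences, densities):
    """evidences[k][i]: output of network i for class k; densities[k][i]: its
    per-class worth g_k(z_i). Returns the class with the largest Sugeno integral."""
    best_class, best_score = None, -1.0
    for k in evidences:
        lam = sugeno_lambda(densities[k])
        # sort detectors by descending evidence and build the nested measures
        order = sorted(range(len(evidences[k])), key=lambda i: -evidences[k][i])
        score, g_prev = 0.0, 0.0
        for rank, i in enumerate(order):
            gi = densities[k][i]
            g_prev = gi if rank == 0 else gi + g_prev + lam * gi * g_prev
            score = max(score, min(evidences[k][i], g_prev))  # Sugeno integral
        if score > best_score:
            best_class, best_score = k, score
    return best_class

h = {"change": [0.8, 0.7, 0.4], "no-change": [0.3, 0.2, 0.6]}   # one pixel
g = {"change": [0.5, 0.4, 0.45], "no-change": [0.5, 0.4, 0.45]}
# fuzzy_integral_decision(h, g) -> "change"
```

Each pixel is thus labeled with the class whose integral, i.e., whose agreement between evidence and worth, is largest.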

Description of the study area and evaluation criteria
The study area is located to the east of Algiers, Algeria. It is a coastal region which comprises the Isser River and its watershed as well as several land cover types. Two SPOT high resolution visible (HRV) images, acquired in May 1989 and June 1991, respectively, were selected for testing the validity of the proposed change detection method. During this period, the region underwent rapid transitions from the classes water, vegetation, and construction to the class soil. We therefore focused on classifying these land cover changes. However, the satellite data depict other changes caused by the presence of clouds in the second image. To avoid any unexpected behavior of the change detection system towards these changes, an additional class "X ⇒ clouds" was taken into account (X denotes any land cover class in the first image). The three spectral channels (XS1, XS2, and XS3) of both images were spatially aligned by using ground control points with a second-order polynomial warp in the ENVI software. The registration was achieved with a residual error of less than 0.13 pixels. As an illustration, Figure 3 depicts only the XS1 bands of both multitemporal images. Since the selection of the training data is crucial to the quality of the result, we used reference maps obtained from unsupervised classifications of the available images. Unfortunately, these maps constitute the unique source of ground truth which can be used to select training data. In order to validate the results, the dataset for each land cover class was randomly split into three disjoint sets to be used, respectively, in the training stage, in the test stage, and for calculating the fuzzy measures (Table 1). These data are linearly scaled between 0 and 1 by dividing the radiometric values by 255.
Change detection performance is evaluated by using the usual error (or confusion) matrix, which highlights the good class allocations, or accuracy rates, per land cover class. In addition, many accuracy measures can be derived from this matrix. We use the overall recognition rate (ORR), computed as the ratio between the sum of the good allocations and the total number of test data, and the Kappa coefficient (Khat), which is computed using all elements of the error matrix. The closer the Khat is to 1 (100%), the more reliable the change detector [14].
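As a reference for how these two measures fall out of the error matrix, a minimal sketch (with a made-up 2×2 matrix, rows = reference classes, columns = assigned classes) is:

```python
def overall_recognition_rate(matrix):
    """Sum of diagonal (correct) allocations over the total number of samples."""
    total = sum(sum(row) for row in matrix)
    return sum(matrix[i][i] for i in range(len(matrix))) / total

def khat(matrix):
    """Kappa coefficient: agreement corrected for chance, from all matrix cells."""
    n = sum(sum(row) for row in matrix)
    diagonal = sum(matrix[i][i] for i in range(len(matrix)))
    chance = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
                 for i in range(len(matrix)))  # row marginal * column marginal
    return (n * diagonal - chance) / (n * n - chance)

cm = [[45, 5],
      [10, 40]]
# overall_recognition_rate(cm) -> 0.85, khat(cm) -> 0.70
```

Unlike the ORR, the Khat uses the off-diagonal cells through the marginal products, which is why it penalizes detectors whose agreement could largely be obtained by chance.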

Architectural and training rule selection
This experiment was conducted to seek the optimal neural network change detector in terms of architecture and training rule. First, the backpropagation (BP) and Kalman filtering (KF) algorithms were used to train the same network in order to compare their performances. Both algorithms were started from the same initial weights, with random values ranging from −0.01 to +0.01, while the setup parameters were selected to maximize the performance of each algorithm. Figure 4 plots the evolution of the mean squared error (MSE) of the network over time. As can be seen, KF converges more rapidly than the BP algorithm. This outcome is confirmed by a second test which calculates the Khat for various iterations. The result is depicted in Figure 5. We notice that the KF rule reaches its maximal Khat value after a few iterations. In contrast, the Khat of BP increases gradually over time. In order to evaluate the ability to detect changes, we plot the ROC curves for both algorithms. These curves are formed by points whose coordinates are the false positive (FP) ratio (which expresses the amount of false alarms, represented on the x-axis) and the true positive (TP) ratio (which expresses the amount of good detections, represented on the y-axis). The change detection process is considered more effective when the area under the ROC curve and above the main diagonal is large. Figure 6 presents the obtained curves, which show that the KF algorithm is more accurate in terms of change detection. On the other hand, we performed another test to find the optimal architecture for change detection. Using the KF algorithm, we trained many neural networks comprising one and two hidden layers in which the number of nodes varies from 8 to 50. The results reported in Figure 7 indicate that in all cases the network with two hidden layers outperforms the one-hidden-layer network.
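The area-under-curve criterion mentioned above can be computed from a handful of (FP, TP) operating points with the trapezoidal rule; the points below are invented for illustration and anchored at (0, 0) and (1, 1).

```python
def roc_auc(points):
    """Trapezoidal area under a piecewise-linear ROC curve."""
    pts = sorted(points)
    return sum((x1 - x0) * (y0 + y1) / 2.0
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

curve = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.8), (0.6, 0.95), (1.0, 1.0)]
area = roc_auc(curve)   # 0.8225, well above the 0.5 of the main diagonal
```

An area of 0.5 corresponds to the main diagonal (chance-level detection), so the quantity "area above the diagonal" used in the text is simply area − 0.5.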
Moreover, for both architectures, increasing the number of hidden nodes does not necessarily lead to an improvement in performance. In fact, the best results are obtained with architectures in which the number of nodes per hidden layer varies between 10 and 20. Roughly speaking, the KF rule improves the performance of the neural network in both the training and generalization stages. Nevertheless, different neural networks perform differently and make their errors in different regions of the input space; we therefore conjecture that the combination of multiple neural networks may improve change detection.

Accuracy assessment of combination rules
The optimal number of experts to be combined depends on the task at hand and should be found experimentally. This is not the purpose of the present work; we simply want to evaluate the usefulness of the combination concept for land cover change detection. However, it is important to stress that when combining only two networks, if one network has poor accuracy in a given class, the accuracy of the combination rule will be lower than that of the more precise network. Therefore, the number of combined networks must be greater than two. Furthermore, it has been shown in [8] that when using the fuzzy integral as a combination rule, the recognition rate depends on the g values: if they change, the value of the fuzzy integral changes as well.
In the present work, three different networks having two hidden layers which contain, respectively, 10, 15, and 20 nodes were chosen to be combined according to their performances in terms of ORR given in Figure 7. The KF algorithm was used in the training stage, which was stopped after 400 iterations. The values of g computed using (13), as well as the λ of each land cover class, are reported in Table 2. g expresses the degree of importance of a given network in a particular class. Moreover, all λ values are close to −1 because the sum of the different degrees of importance is greater than 1. In this case, the degree of importance may be interpreted as a plausibility value [8]. Banon [15] showed that λ ≤ 0 if g is a plausibility measure. On the other hand, the parameters of S-OWA-OR and S-OWA-AND were experimentally fixed at 0.2 and 0.5, respectively. Table 3 summarizes the error matrices obtained for the different change detectors, while Table 4 gives the corresponding ORR and Khat values. As expected, while the performance of the individual neural networks varies from one land cover class to another, the combination rules give a significant improvement in the recognition rate for the majority of classes, especially the three change classes (labeled 5, 6, and 7). Consequently, they produce higher overall accuracy, with a gain of more than 3% (with I_C) and 8% (with I_OR) over the best individual network. More specifically, the I_OR integral presents the most satisfactory results in terms of ORR and Khat. However, a surprising outcome occurred in the class Construction ⇒ soil (class 6), which was the most critical one: there, I_C still exhibited poor accuracy (63.46%).

Visual inspection
As a result of implementing the different change detectors presented above, the maps depicted in Figure 8 were obtained. In this figure, only the change classes, namely construction ⇒ soil, water ⇒ soil, and vegetation ⇒ soil, are depicted, since we are interested in change detection. It is easy to see that the individual neural networks have limited generalization capacity. In fact, they fail to detect a large number of areas of the class vegetation ⇒ soil (see circles in Figures 8a and 8b). Moreover, they produce a large number of spurious changes in the class construction ⇒ soil (see rectangles in Figures 8a and 8b). On the contrary, the fuzzy integrals provide much cleaner change detection maps, with fewer missed and isolated spurious changes. Thus, we infer that the combination rules discriminate better between classes, which demonstrates once again the effectiveness of change detector combination.

DISCUSSION AND CONCLUSIONS
In recent years, artificial neural networks have shown particular relevance to land cover change detection because they provide complete information on the nature of change, which not all change detection schemes allow. However, with this method the user is faced with a large number of training rules and many possible architectures and setup parameters. The problem is that different networks produce different results since they make their errors in different regions of the input space. In fact, there are various neural network optimization methods, such as the Metropolis scheme and genetic algorithms [9], which derive the optimal value of only one parameter (the network architecture or the best sequence of weight initializations). Therefore, the combination of different neural networks seems more interesting, since it aggregates the decision functions of different networks which can have different architectures, different training rules, and different setup parameters. In this paper, we investigated the potential of combining multiple neural networks for change detection in remotely sensed imagery. Our contribution was twofold. First, extensive experiments were carried out to obtain the optimal architecture for the neural network. In addition, the performance of the backpropagation algorithm, which is the standard rule for training neural networks, was compared to that of the Kalman filtering algorithm. The results indicated that the Kalman filtering algorithm is superior to backpropagation in terms of convergence speed and change detection accuracy. Moreover, the use of artificial neural networks in remote sensing image processing was effectively highlighted for change detection. In a second step, we combined three different neural networks by using a strategy based on fuzzy integrals to increase change detection accuracy.
The main advantage of this method is that it takes into account the reliability of individual networks as well as of subsets of networks. The conventional Sugeno and Choquet fuzzy integrals were used as combination operators. In addition, we implemented two extensions of the Sugeno integral based on the OWA-AND and OWA-OR operators. Notice that the OWA-AND operator requires that all change detectors make the right decision, while the OWA-OR requires that at least one change detector gives the right decision. This is the reason why I_OR and I_S outperformed I_AND. The results obtained by combining three different networks demonstrated the effectiveness of the combination concept. Specifically, it has been shown that each combiner provides an overall accuracy higher than those of the individual networks and tends to improve the generalization performance by equalizing the accuracy rates of the individual classes. Furthermore, the fuzzy integrals produced very similar results, which indicates that their different forms all stem from the same concept. Finally, throughout this study, the fuzzy integral appeared to be a straightforward and computationally tractable approach which can significantly enhance change detection in remotely sensed data.

Call for Papers
Spatial sound reproduction has become widespread in the form of multichannel audio, particularly through home theater systems. Reproduction systems from binaural (by headphones) to hundreds of loudspeaker channels (such as wave field synthesis) are entering practical use. The application potential of spatial sound is much wider than multichannel sound, however, and research in the field is active. Spatial sound covers for example the capturing, analysis, coding, synthesis, reproduction, and perception of spatial aspects in audio and acoustics.
In addition to the topics mentioned above, research in virtual acoustics broadens the field. Virtual acoustics includes techniques and methods to create realistic percepts of sound sources and acoustic environments that do not exist naturally but are rendered by advanced reproduction systems using loudspeakers or headphones. Augmented acoustic and audio environments contain both real and virtual acoustic components.
Spatial sound and virtual acoustics are among the major research and application areas in audio signal processing. Topics of active study range from new basic research ideas to improvement of existing applications. Understanding of spatial sound perception by humans is also an important area, in fact a prerequisite to advanced forms of spatial sound and virtual acoustics technology.
This special issue will focus on recent developments in this key research area. Topics of interest include (but are not limited to): • Since its invention in the 19th century when it was little more than a scientific curiosity, the electrocardiogram (ECG) has developed into one of the most important and widely used quantitative diagnostic tools in medicine. It is essential for the identification of disorders of the cardiac rhythm, extremely useful for the diagnosis and management of heart abnormalities such as myocardial infarction (heart attack), and offers helpful clues to the presence of generalised disorders that affect the rest of the body, such as electrolyte disturbances and drug intoxication.
Recording and analysis of the ECG now involves a considerable amount of signal processing for S/N enhancement, beat detection, automated classification, and compression. These involve a whole variety of innovative signal processing methods, including adaptive techniques, time-frequency and time-scale procedures, artificial neural networks and fuzzy logic, higher-order statistics and nonlinear schemes, fractals, hierarchical trees, Bayesian approaches, and parametric models, amongst others.
This special issue will review the current status of ECG signal processing and analysis, with particular regard to recent innovations. It will report major achievements of academic and commercial research institutions and individuals, and provide an insight into future developments within this exciting and challenging area.
This special issue will focus on recent developments in this key research area. Topics of interest include (but are not limited to):

Call for Papers
Recently, end users and utility companies are increasingly concerned with perturbations originated from electrical power quality variations. Investigations are being carried out to completely characterize not only the old traditional type of problems, but also new ones that have arisen as a result of massive use of nonlinear loads and electronics-based equipment in residences, commercial centers, and industrial plants. These nonlinear load effects are aggravated by massive power system interconnections, increasing number of different power sources, and climatic changes. In order to improve the capability of equipments applied to monitoring the power quality of transmission and distribution power lines, power systems have been facing new analysis and synthesis paradigms, mostly supported by signal processing techniques. The analysis and synthesis of emerging power quality and power system problems led to new research frontiers for the signal processing community, focused on the development and combination of computational intelligence, source coding, pattern recognition, multirate systems, statistical estimation, adaptive signal processing, and other digital processing techniques, implemented in either DSP-based, PC-based, or FPGA-based solutions.
The goal of this proposal is to introduce powerful and efficient real-time or almost-real-time signal processing tools for dealing with the emerging power quality problems. These techniques take into account power-line signals and complementary information, such as climatic changes.
This special issue will focus on recent developments in this key research area. Topics of interest include (but are not limited to): Digital signal processing techniques applied to power quality applications are a very attractive and stimulating area of research. Its results will provide, in the near future, new standards for the decentralized and real-time monitoring of transmission and distribution systems, allowing to closely follow and predict power system performance. As a result, the power systems will be more easily planned, expanded, controlled, managed, and supervised.
Authors should follow the EURASIP JASP manuscript format described at http://www.hindawi.com/journals/asp/. Prospective authors should submit an electronic copy of their complete manuscripts through the EURASIP JASP manuscript tracking system at http://www.mstracking.com/asp/, according to the following timetable:

Call for Papers
When designing a system for image acquisition, there is generally a desire for high spatial resolution and a wide fieldof-view. To achieve this, a camera system must typically employ small f-number optics. This produces an image with very high spatial-frequency bandwidth at the focal plane. To avoid aliasing caused by undersampling, the corresponding focal plane array (FPA) must be sufficiently dense. However, cost and fabrication complexities may make this impractical. More fundamentally, smaller detectors capture fewer photons, which can lead to potentially severe noise levels in the acquired imagery. Considering these factors, one may choose to accept a certain level of undersampling or to sacrifice some optical resolution and/or field-of-view. In image super-resolution (SR), postprocessing is used to obtain images with resolutions that go beyond the conventional limits of the uncompensated imaging system. In some systems, the primary limiting factor is the optical resolution of the image in the focal plane as defined by the cut-off frequency of the optics. We use the term "optical SR" to refer to SR methods that aim to create an image with valid spatial-frequency content that goes beyond the cut-off frequency of the optics. Such techniques typically must rely on extensive a priori information. In other image acquisition systems, the limiting factor may be the density of the FPA, subsequent postprocessing requirements, or transmission bitrate constraints that require data compression. We refer to the process of overcoming the limitations of the FPA in order to obtain the full resolution afforded by the selected optics as "detector SR." Note that some methods may seek to perform both optical and detector SR.
Detector SR algorithms generally process a set of lowresolution aliased frames from a video sequence to produce a high-resolution frame. When subpixel relative motion is present between the objects in the scene and the detector array, a unique set of scene samples are acquired for each frame. This provides the mechanism for effectively increasing the spatial sampling rate of the imaging system without reducing the physical size of the detectors.
With increasing interest in surveillance and the proliferation of digital imaging and video, SR has become a rapidly growing field. Recent advances in SR include innovative algorithms, generalized methods, real-time implementations, and novel applications. The purpose of this special issue is to present leading research and development in the area of super-resolution for digital video. Topics of interest for this special issue include but are not limited to: • Detector and optical SR algorithms for video • Real-time or near-real-time SR implementations • Innovative color SR processing • Novel SR applications such as improved object detection, recognition, and tracking • Super-resolution from compressed video • Subpixel image registration and optical flow

Call for Papers
In recent years, increased demand for fast Internet access and new multimedia services, the development of new and feasible signal processing techniques associated with faster and low-cost digital signal processors, as well as the deregulation of the telecommunications market have placed major emphasis on the value of investigating hostile media, such as powerline (PL) channels for high-rate data transmissions.
Nowadays, some companies are offering powerline communications (PLC) modems with mean and peak bit-rates around 100 Mbps and 200 Mbps, respectively. However, advanced broadband powerline communications (BPLC) modems will surpass this performance. For accomplishing it, some special schemes or solutions for coping with the following issues should be addressed: (i) considerable differences between powerline network topologies; (ii) hostile properties of PL channels, such as attenuation proportional to high frequencies and long distances, high-power impulse noise occurrences, time-varying behavior, and strong inter-symbol interference (ISI) effects; (iv) electromagnetic compatibility with other well-established communication systems working in the same spectrum, (v) climatic conditions in different parts of the world; (vii) reliability and QoS guarantee for video and voice transmissions; and (vi) different demands and needs from developed, developing, and poor countries.
These issues can lead to exciting research frontiers with very promising results if signal processing, digital communication, and computational intelligence techniques are effectively and efficiently combined.
The goal of this special issue is to introduce signal processing, digital communication, and computational intelligence tools, either individually or in combined form, for advancing reliable and powerful future generations of powerline communication solutions suited for applications in developed, developing, and poor countries.
Topics of interest include (but are not limited to):
• Multicarrier, spread spectrum, and single carrier techniques
• Channel modeling

Authors should follow the EURASIP JASP manuscript format described at the journal site http://asp.hindawi.com/. Prospective authors should submit an electronic copy of their complete manuscripts through the EURASIP JASP manuscript tracking system at http://www.mstracking.com/asp/, according to the following timetable:

Transforming Signal Processing Applications into Parallel Implementations

Call for Papers
There is an increasing need to develop efficient "system-level" models, methods, and tools that support designers in quickly transforming signal processing application specifications into heterogeneous hardware and software architectures, such as arrays of DSPs, heterogeneous platforms involving microprocessors, DSPs, and FPGAs, and other evolving multiprocessor SoC architectures. Typically, the design process involves aspects of application and architecture modeling as well as transformations that translate the application models to architecture models for subsequent performance analysis and design space exploration. Accurate predictions are indispensable because next-generation signal processing applications, for example, audio, video, and array signal processing, impose high-throughput, real-time, and energy constraints that can no longer be served by a single DSP.
There are a number of key issues in transforming application models into parallel implementations that are not addressed in current approaches. These include engineering the application specification, transforming the application specification into a representation of the architecture specification, and handling communication models, such as data transfer and synchronization primitives, in both models.
The purpose of this call for papers is to address approaches that include application transformations in the performance analysis and design space exploration efforts when taking signal processing applications to concurrent and parallel implementations. The Guest Editors are soliciting contributions in joint application and architecture space exploration that outperform the current architecture-only design space exploration methods and tools.
Topics of interest for this special issue include but are not limited to:
• modeling applications in terms of (abstract) control-dataflow graph, dataflow graph, and process network models of computation (MoC)
• transforming application models or algorithmic engineering
• transforming application MoCs to architecture MoCs
• joint application and architecture space exploration
• joint application and architecture performance analysis
• extending the concept of algorithmic engineering to architecture engineering
• design cases and applications mapped on multiprocessor, homogeneous, or heterogeneous SoCs, showing joint optimization of application and architecture

Authors should follow the EURASIP JASP manuscript format described at http://www.hindawi.com/journals/asp/. Prospective authors should submit an electronic copy of their complete manuscript through the EURASIP JASP manuscript tracking system at http://www.mstracking.com/asp/, according to the following timetable:

Call for Papers
Facial image processing is an area of research dedicated to the extraction and analysis of information about human faces; information which is known to play a central role in social interactions including recognition, emotion, and intention. Over the last decade, it has become a very active research field that deals with face detection and tracking, facial feature detection, face recognition, facial expression and emotion recognition, face coding, and virtual face synthesis.
With the introduction of new powerful machine learning techniques, statistical classification methods, and complex deformable models, recent progress has made possible a large number of applications in areas such as model-based video coding, image retrieval, surveillance and biometrics, visual speech understanding, virtual characters for e-learning, online marketing or entertainment, intelligent human-computer interaction, and others.
However, much progress is yet to be made to provide more robust systems, especially when dealing with pose and illumination changes in complex natural scenes. While most approaches naturally focus on processing still images, emerging techniques may also consider different inputs. For instance, video is becoming ubiquitous and very affordable, and there is growing demand for vision-based human-oriented applications, ranging from security to human-computer interaction and video annotation.
Taking into account temporal information and the dynamics of faces may also ease applications such as facial expression recognition and face recognition, which are still very challenging tasks.
Capturing 3D data may also become very affordable, and processing such data can lead to enhanced systems that are more robust to illumination effects and in which discriminant information may be more easily retrieved.
The goal of this special issue is to provide original contributions in the field of facial image processing.
Topics of interest include (but are not limited to): •

Special Issue on
Genetic Regulatory Networks

Call for Papers
Genomic signal processing (GSP) has been defined as the analysis, processing, and use of genomic signals for gaining biological knowledge and the translation of that knowledge into systems-based applications. A major goal of GSP is to characterize genetic regulation and its effects on cellular behaviour and function, thereby leading to a functional understanding of diseases and the development of systems-based medical solutions. This involves the development of nonlinear dynamical network models for genomic regulation and of mathematically grounded diagnostic and therapeutic tools based on those models. This special issue is devoted to genetic regulatory networks. We desire high-quality papers on all network issues, including: •

Call for Papers
This special issue aims to draw together work on sparse system identification and partial-update adaptive filters. These research problems can be considered as exploiting sparseness in different "domains", namely, the adaptive filter coefficient vector and the update regressor vector. This special issue will further develop the positive outcomes of the EUSIPCO 2005 special session on sparse system identification and partial-update adaptive algorithms. Identification of sparse and/or high-order FIR systems has always been a challenging research problem. In many applications, including acoustic/network echo cancellation and channel equalization, the system to be identified can be characterized as sparse and/or long. Partial-update adaptive filtering algorithms were proposed to address the large computational complexity associated with long adaptive filters. However, the initial partial-update algorithms had to incur performance losses, such as slow convergence, compared with full-update algorithms because of the absence of clever updating approaches. More recently, better partial-update techniques have been developed that are capable of minimizing the performance loss. In certain applications, these partial-update techniques have even been observed to produce improved convergence performance with respect to a full-update algorithm. The potential performance gain that can be achieved by partial-update algorithms is an important feature of these adaptive techniques that was not recognized earlier. The notion of partial-update adaptive filtering has been gaining momentum thanks to the recognition of its complexity and performance advantages.
Sparse system identification is a vital requirement for fast converging adaptive filters in, for example, certain specific deployments of echo cancellation. Recent advances, such as IPNLMS, have been used to good effect in network echo cancellation for VoIP gateways (to take account of unpredictable bulk delays in IP network propagation) and acoustic echo cancellation (to handle the unknown propagation delay of the direct acoustic path). It is known that several research labs are working on these problems with new solutions emerging.
This special issue will focus on recent developments in this key research area. Topics of interest include (but are not limited to):
• Adaptive filters employing partial-update methods
• Time-domain and transform-domain implementations of partial-update adaptive filters
• Convergence and complexity analysis of partial-update schemes
• Single and multichannel algorithms employing partial updates
• Adaptive algorithms for sparse system identification
• Applications of partial-update adaptive filters and sparse system identification in echo/noise cancellation, acoustics, and telecommunications
• Partial-update filters for sparse system identification
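The partial-update idea discussed in this call can be illustrated with a minimal M-max NLMS sketch (a hypothetical example, not drawn from any paper in this issue; the function name, filter length, number of updated taps, and step size are all assumptions): at each iteration only the taps aligned with the largest-magnitude regressor entries are updated, reducing per-sample update cost while retaining convergence.

```python
import numpy as np

def mmax_nlms(x, d, L=16, M=4, mu=0.5, eps=1e-6):
    """Partial-update (M-max) NLMS: per sample, only the M filter taps
    aligned with the largest-magnitude regressor entries are updated,
    cutting the coefficient-update cost from O(L) toward O(M)."""
    w = np.zeros(L)            # adaptive filter coefficients
    xbuf = np.zeros(L)         # regressor, most recent sample first
    e = np.zeros(len(x))       # a-priori error signal
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[n]
        e[n] = d[n] - w @ xbuf                  # a-priori error
        idx = np.argsort(np.abs(xbuf))[-M:]     # M largest |x| entries
        norm = eps + xbuf @ xbuf                # full regressor energy
        w[idx] += mu * e[n] * xbuf[idx] / norm  # update selected taps only
    return w, e
```

With M = L this reduces to standard NLMS; smaller M trades convergence speed for complexity, which is exactly the tradeoff the call describes. For a sparse target system (e.g., a long echo path with few active taps), proportionate variants such as IPNLMS, mentioned above, would additionally weight the step size per coefficient.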

Call for Papers
Over the past few decades, medical computed imaging has established its role as a major clinical tool. Technical advancements as well as advanced new algorithms have substantially improved spatial and temporal resolution and contrast. Nevertheless, despite these improvements, single-modality scans cannot always provide the full clinical picture. Resolution and image quality are often compromised in order to obtain functional images. This is particularly true for NM imaging and has led to the development of hybrid scanners such as PET/CT and SPECT/CT. Also, the old problem of multimodality image fusion has attracted, and probably will continue to attract, a lot of research. This has motivated us to edit a special issue which will provide a state-of-the-art picture of multimodality imaging.
The International Journal of Biomedical Imaging (IJBI) follows the Open Access model and publishes accepted papers on the web and in print. It targets rapid review, permanent archiving, high visibility, and lasting impact. In this special issue, the topics covered will include, but are not limited to, the following areas:
• New approaches and applications of PET/CT and SPECT/CT hybrid scanners
• Methods for image fusion of MRI and/or CT and/or Ultrasound
• Algorithms for data fusion and hybrid image reconstruction and display
• Methods for dual-modality scan alignment using fiducial markers, masks, and so forth
• Software-based multimodality image alignment
• Novel dual-modality scanning approaches
• Real-time navigation for image-guided intervention using multimodality systems

NEWS RELEASE
Nominations Invited for the Institute of Acoustics

A B Wood Medal
The Institute of Acoustics, the UK's leading professional body for those working in acoustics, noise and vibration, is inviting nominations for its prestigious A B Wood Medal for the year 2006.
The A B Wood Medal and prize is presented to an individual, usually under the age of 35, for distinguished contributions to the application of underwater acoustics. The award is made annually, in even-numbered years to a person from Europe and in odd-numbered years to someone from the USA/Canada. The 2005 Medal was awarded to Dr A Thode from the USA for his innovative, interdisciplinary research in ocean and marine mammal acoustics.
Nominations should consist of the candidate's CV, clearly identifying peer-reviewed publications, and a letter of endorsement from the nominator identifying the contribution the candidate has made to underwater acoustics. In addition, there should be a further reference from a person involved in underwater acoustics and not closely associated with the candidate. Nominees should be citizens of a European country. Dr Tony Jones, President of the Institute of Acoustics, comments, "A B Wood was a modest man who took delight in helping his younger colleagues. It is therefore appropriate that this prestigious award should be designed to recognise the contributions of young acousticians." Further information and a nomination form can be found on the Institute's website at www.ioa.org.uk.

A B Wood
Albert Beaumont Wood was born in Yorkshire in 1890 and graduated from Manchester University in 1912. He became one of the first two research scientists at the Admiralty to work on antisubmarine defence. He designed the first directional hydrophone and was well known for the many contributions he made to the science of underwater acoustics and for the help he gave to younger colleagues. The medal was instituted after his death by his many friends on both sides of the Atlantic and was administered by the Institute of Physics until the formation of the Institute of Acoustics in 1974.

EDITORS' NOTES
The Institute of Acoustics is the UK's professional body for those working in acoustics, noise and vibration. It was formed in 1974 from the amalgamation of the Acoustics Group of the Institute of Physics and the British Acoustical Society (a daughter society of the Institution of Mechanical Engineers). The Institute of Acoustics is a nominated body of the Engineering Council, offering registration at Chartered and Incorporated Engineer levels. The Institute has some 2500 members from a rich diversity of backgrounds, with engineers, scientists, educators, lawyers, occupational hygienists, architects and environmental health officers among their number. This multidisciplinary culture provides a productive environment for cross-fertilisation of ideas and initiatives. The range of interests of members within the world of acoustics is equally wide, embracing such aspects as aerodynamics, architectural acoustics, building acoustics, electroacoustics, engineering dynamics, noise and vibration, hearing, speech, underwater acoustics, together with a variety of environmental aspects. The lively nature of the Institute is demonstrated by the breadth of its learned society programmes.

The popularity of multimedia content has led to the widespread distribution and consumption of digital multimedia data. As a result of the relative ease with which individuals may now alter and repackage digital content, ensuring that media content is employed by authorized users for its intended purpose is becoming an issue of eminent importance to both governmental security and commercial applications. Digital fingerprinting is a class of multimedia forensic technologies to track and identify entities involved in the illegal manipulation and unauthorized usage of multimedia content, thereby protecting the sensitive nature of multimedia data as well as its commercial value after the content has been delivered to a recipient.
"Multimedia Fingerprinting Forensics for Traitor Tracing" covers the essential aspects of research in this emerging technology, and explains the latest development in this field. It describes the framework of multimedia fingerprinting, discusses the challenges that may be faced when enforcing usage policies, and investigates the design of fingerprints that cope with new families of multiuser attacks that may be mounted against media fingerprints. The discussion provided in the book highlights challenging problems as well as future trends in this research field, providing readers with a broader view of the evolution of the young field of multimedia forensics.

Topics and features:
• Comprehensive coverage of digital watermarking and fingerprinting in multimedia forensics for a number of media types
• Detailed discussion on challenges in multimedia fingerprinting and analysis of effective multiuser collusion attacks on digital fingerprinting
• Thorough investigation of fingerprint design and performance analysis for addressing different application concerns arising in multimedia fingerprinting
• Well-organized explanation of problems and solutions, such as order-statistics-based nonlinear collusion attacks, efficient detection and identification of colluders, group-oriented fingerprint design, and anticollusion codes for multimedia fingerprinting

The EURASIP Book Series on Signal Processing and Communications publishes monographs, edited volumes, and textbooks on Signal Processing and Communications. For more information about the series please visit: http://hindawi.com/books/spc/about.html For more information and online orders please visit: http://www.hindawi.com/books/spc/volume-2/ For any inquiries on how to order this title please contact books.orders@hindawi.com EURASIP Book Series on SP&C, Volume 2, ISBN 977-5945-07-0

Recent advances in genomic studies have stimulated synergetic research and development in many cross-disciplinary areas. Genomic data, especially the recent large-scale microarray gene expression data, represent enormous challenges for signal processing and statistics in processing these vast data to reveal the complex biological functionality. This perspective naturally leads to a new field, genomic signal processing (GSP), which studies the processing of genomic signals by integrating the theory of signal processing and statistics. Written by an international, interdisciplinary team of authors, this invaluable edited volume is accessible to students just entering this emergent field, and to researchers, both in academia and industry, in the fields of molecular biology, engineering, statistics, and signal processing. The book provides tutorial-level overviews and addresses the specific needs of genomic signal processing students and researchers as a reference book.
The book aims to address current genomic challenges by exploiting potential synergies between genomics, signal processing, and statistics, with special emphasis on signal processing and statistical tools for structural and functional understanding of genomic data. The book is partitioned into three parts. In part I, a brief history of genomic research and a background introduction from both biological and signal-processing/statistical perspectives are provided so that readers can easily follow the material presented in the rest of the book. In part II, overviews of state-of-the-art techniques are provided. We start with a chapter on sequence analysis, and follow with chapters on feature selection, clustering, and classification of microarray data. The next three chapters discuss the modeling, analysis, and simulation of biological regulatory networks, especially gene regulatory networks based on Boolean and Bayesian approaches. The next two chapters treat visualization and compression of gene data, and supercomputer implementation of genomic signal processing systems. Part II concludes with two chapters on systems biology and medical implications of genomic research. Finally, part III discusses the future trends in genomic signal processing and statistics research.

GENOMIC SIGNAL PROCESSING AND STATISTICS
Edited by: Edward R. Dougherty, Ilya Shmulevich, Jie Chen, and Z. Jane Wang