 Research
 Open Access
Short-range tracking of rainy clouds by multi-image flow processing of X-band radar data
EURASIP Journal on Advances in Signal Processing volume 2011, Article number: 67 (2011)
Abstract
Two innovative algorithms for the motion tracking and monitoring of rainy clouds from radar images are proposed. The methods are generalizations of classical optical flow techniques, including a production term (modelling the formation, growth, or depletion of clouds) in the model to be fit to the data. Multiple images are processed and different smoothness constraints are introduced. When applied to simulated maps (including additive noise up to 10 dB of SNR) showing the formation and propagation of objects with different directions and velocities, the algorithms identified the production and the flow correctly, and were stable to noise when the number of images was sufficiently high (about 10). The average error was about 0.06 pixels (px) per sampling interval (ΔT) in identifying the modulus of the flow (velocities between 0.25 and 2 px/ΔT were simulated) and about 1° in detecting its direction (varying between 0° and 90°). An example of application to X-band radar rainfall rate images detected during a stratiform rainfall is shown. Different directions of the flow were detected when investigating short (10 min) or long (8 h) time ranges, in line with the chaotic behaviour of the weather conditions. The algorithms can be applied to investigate the local stability of meteorological conditions, with potential future applications in nowcasting.
1. Introduction
Quantitative precipitation monitoring and forecasting is an important issue in water management, in flood forecasting, and in predicting hazardous conditions. Specific problems are the distinction of rain from snow [1], the monitoring of basins subject to floods or of areas prone to landslides [2], and the forecasting of sudden rainfall over strategic regions, such as airports [3]. In these situations, detailed areal measurements of precipitation over a local spatial scale of a few tens of km and on a short time scale (e.g., 30 min, nowcasting) are needed.
Traditionally, rainfall has been measured using rain gauges dispersed over the surface area of interest. Nevertheless, they may be affected by gross errors, as wind, snowfall, and the drop size distribution influence the measurement. Moreover, a very dense network of gauges is needed, as the correlation between the measurements taken by two rain gauges is poor even at 500 m distance over time scales of 30 min [4].
As an alternative, radars may be used to study rainy clouds. Rainfall investigations have usually been conducted using S-band or C-band polarimetric radars, whose long wavelengths (about 10 and 5 cm, respectively) allow for low attenuation [5]. These radars are typically used for long-range meteorological target detection. On the other hand, X-band radars can work only at short ranges, and their radiation is significantly attenuated behind heavy precipitation (due to the smaller wavelength, of about 3 cm). However, they have finer resolution and smaller antennas than those required by S-band or C-band radars, resulting in easier mobility and lower costs [6]. Moreover, X-band radars have some advantages over S-band and C-band radars when investigating regions with a complex orography [4, 7].
Radar images have been used for precipitation nowcasting. Different techniques are based on the correlation of successive images [8], on tracking the centroid of an object [9], and on the use of numerical prediction of wind advection [10]. Classical optical flow methods can estimate the motion of objects by comparing pairs of consecutive images [11, 12]. Some multi-frame techniques have also been introduced, in order to enhance robustness to noise and improve the discriminating capabilities of the algorithm [13, 14]. When applied to precipitation forecasting [15, 16], optical flow techniques track rainfall objects assuming that they remain constant in intensity (brightness constancy condition). On the other hand, shower clouds and the fine structure of stratiform rain bands can develop in a few minutes [17]. Their identification is not possible with classical optical flow techniques, and their prediction is very difficult, requiring detailed local information (which could be provided by X-band radars).
Other local weather forecast algorithms are based on the analysis of a few time series representing meteorological variables such as air temperature and humidity, visibility distance, wind speed and direction, precipitation type and rate, cloud cover, and lightning [18]. Nonlinear prediction methods (e.g., based on artificial neural network models) usually compare the present condition with similar ones that occurred in the past and were saved in a database. Nevertheless, meteorological variables have chaotic dynamics which can abruptly lead to completely different conditions, even starting from similar ones [19, 20]. The identification of the stability or instability of the weather condition by a short-range investigation could enhance the performance of local predictions by time-series analysis.
This article is devoted to the identification of the formation and propagation of rainy clouds at short range, using data from an X-band radar. An innovative approach is proposed, based on classical optical flow theory, but estimating the formation and decay of rainy clouds in addition to their movements. This analysis can hardly be used for forecasting purposes, as the short spatial range investigated limits the time range over which predictions are reliable, especially in the presence of fast rainy clouds or showers. Nevertheless, it could provide valuable indications on the stability or instability of weather conditions, which could feed a time-series based local model for rain prediction.
2. Methods
2.1. Description of experimental data
A new version of the X-band radar described in [4] was used. The radar transmits rectangular 10 kW peak-power pulses (400 ns duration) at a frequency of 9.41 GHz, through a parabolic antenna with 3.6° beamwidth and 34 dB maximum gain. A maximum coverage of 30 km can be reached, with an angular resolution of about 3° and a range resolution of 120 m.
Received power related to meteorological echoes within each radar volume bin is converted into the averaged reflectivity inside that volume. Two-dimensional (2D) maps of reflectivity (Z in mm⁶ m⁻³) are provided as output to the preprocessing stage with a sampling interval of 1 min. Radar reflectivity Z was converted into rainfall rate R (measured in mm/h) using the Marshall-Palmer Z-R relation [21]

Z = A R^b     (1)

where A and b are parameters that can be estimated by fitting experimental data from rain gauges placed in the area investigated by the radar. In this study, radar reflectivity data were converted into rainfall rates using the relation introduced in [22], fitted on 7 years of data recorded in central Europe:
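As a minimal sketch, the inversion of Equation 1 can be written as follows. The default coefficients A = 200 and b = 1.6 are the classical textbook Marshall-Palmer values, used here only as illustrative placeholders: they are not the coefficients of [22], which are fitted to local gauge data.

```python
def rain_rate_from_reflectivity(z_linear, a=200.0, b=1.6):
    """Invert the Marshall-Palmer relation Z = A * R**b to get R in mm/h.

    z_linear : reflectivity Z in linear units (mm^6 m^-3), not dBZ.
    a, b     : illustrative textbook values; the paper fits them to
               local rain-gauge data instead.
    """
    return (z_linear / a) ** (1.0 / b)

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Same conversion starting from reflectivity expressed in dBZ."""
    z_linear = 10.0 ** (dbz / 10.0)
    return rain_rate_from_reflectivity(z_linear, a, b)
```

With these placeholder coefficients, a reflectivity of 200 mm⁶ m⁻³ maps to exactly 1 mm/h, which is a convenient sanity check.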
2.2. Mathematical model
Different radar images of rainfall rate can be compared to identify the formation, growth, or propagation of rainy clouds. Radar maps (within the considered time window) are modelled by the following equation

∂I/∂t + v⃗(x, y) · ∇I(x, y, t) = F(x, y)     (3)

where I(x, y, t) is the intensity of the image as a function of the spatial coordinates (x, y) and time t (representing the radar reflectivity Z), v⃗(x, y) = (v₁(x, y), v₂(x, y)) is the velocity flow, and F(x, y) is a production term (which describes generation if positive, depletion if negative). The left-hand side of Equation 3 is the total time derivative along the path of a propagating object in the image, which during its propagation may also vary its amplitude through the production term F(x, y). This model is quite general, but it relies on assumptions that are not physical. For example, the distribution of clouds is 3D, whereas the model is 2D; the merging, intersection, or growth of clouds in the available images could therefore be the result of a complicated 3D motion. Thus, caution is needed in interpreting the results.
In practice, both space and time variables are sampled. Thus, differential operators in Equation 3 are estimated within some approximation from sampled images. Velocity flow and production term are assumed to be constant in the considered time window, which is sampled by N radar images.
When the production term is neglected, Equation 3 describes pure flow. Such a model has been applied to investigate various moving objects, for example to track objects within the scenes of a television signal [23].
It is not possible to solve optical flow problems directly from two images, as two unknowns (the two components of the velocity flow) must be estimated from one equation (the aperture problem [24]). In the case of problem (3), the production term is a further unknown. Moreover, the production term could account for any time evolution of the image I(x, y, t) without including any flow v⃗(x, y), leading to the trivial solution

v⃗(x, y) = 0,   F(x, y) = ∂I/∂t     (4)
This problem is avoided when more than two images are included, assuming that both the velocity field and the production term are constant for all the N frames in the time window under consideration. This imposes that the motion of objects across the images is accounted for by the velocity field v⃗(x, y), while the appearance, growth, depletion, or extinction of objects determines the production term F(x, y).
2.3. Numerical implementation
The simplifying assumptions of the model and the random noise in the experimental data imply that Equation 3 can hold only approximately. For this reason, the parameters of the model (i.e., v⃗(x, y) and F(x, y)) cannot be estimated exactly, but only by minimising the error with respect to the data. Given N images, the velocity field v⃗(x, y) and the production term F(x, y) can be estimated optimally by solving the following mean square error problem

min over v⃗, F of   Σ_{(i,j)} ‖ I_t^{ij} + v⃗ · ∇I^{ij} − F ‖₂²     (5)

where ‖·‖₂² indicates the square of the norm of the space of square-integrable functions L², I_t^{ij} = (I^i − I^j)/((i − j)dt) is the discrete version of the time derivative, with I^i indicating the i-th reflectivity map and dt the time sampling interval, I^{ij} is the radar map at the time sample (i + j)/2 · dt (the mean of the two closest maps was used when (i + j)/2 was not an integer), and ∇I^{ij} is the gradient of I^{ij} (estimated with a second-order finite difference approximation). It is worth noticing that in Equation 5 all pairs of maps with maximal distance equal to 3 were considered. Including more maps lowers the effect of noise; on the other hand, as the delay between maps increases, the finite difference approximation of model (3) is affected by an increasing error. For this reason, it is better to limit the time range of the map pairs included in (5) (or, alternatively, to penalize the terms in the sum as a function of the delay between maps). Depending on the application and on the sampling frequency, the optimal maximal delay between maps should be chosen accordingly.
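The bookkeeping behind Equation 5 can be sketched in pure Python (maps stored as flat lists of pixel values; function names are ours, not the authors' implementation). With N maps and a maximal delay of 3, the number of pairs is (N−1)+(N−2)+(N−3) = 3(N−2).

```python
def map_pairs(n_maps, max_delay=3):
    """Enumerate the (i, j) map pairs entering the sum in Equation 5,
    keeping only pairs whose delay j - i does not exceed max_delay."""
    return [(i, j) for i in range(n_maps) for j in range(i + 1, n_maps)
            if j - i <= max_delay]

def time_derivative(maps, i, j, dt=1.0):
    """Discrete time derivative I_t^{ij} = (I^i - I^j) / ((i - j) dt)."""
    scale = 1.0 / ((i - j) * dt)
    return [(maps[i][p] - maps[j][p]) * scale for p in range(len(maps[i]))]

def midpoint_map(maps, i, j):
    """Map at time sample (i + j)/2 * dt; the mean of the two closest
    maps is used when (i + j)/2 is not an integer."""
    if (i + j) % 2 == 0:
        return list(maps[(i + j) // 2])
    lo = (i + j) // 2
    return [0.5 * (maps[lo][p] + maps[lo + 1][p]) for p in range(len(maps[lo]))]
```

For N = 10 maps, `map_pairs` yields 24 = 3(N−2) pairs, matching the count of conditions used later in the Lucas-Kanade system.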
When time evolutions of the velocity field and of the production term are of interest, their estimation can be performed for a set of sliding time windows. Time evolutions are expected to be smooth, as the velocity field and the production term are computed assuming that they are constant for all the N maps in the considered time window.
In optical flow techniques, to avoid the aperture problem, the velocity field is also constrained to be smooth in space [11, 12]. This condition can be imposed either locally (requiring the flow to be constant in a neighbourhood of the considered pixel, the Lucas and Kanade method [11]) or by introducing global smoothness constraints (the Horn-Schunck method [12]).
2.4. Estimation of flow and production
In optical flow problems, in which the production term is not included, the brightness constancy condition together with spatial constraints is sufficient to estimate the flow even from two images. Such a flow was proven to reside in a low-dimensional linear space. By constraining it to have the correct low number of degrees of freedom, the noise content in the data can be reduced and a robust estimation of the flow can be obtained [13]. Two methods for the estimation of optical flow from a multi-frame analysis were recently introduced in [14] and compared to the technique proposed in [13]. The smoothness constraint was imposed locally (in line with the Lucas-Kanade approach). Performance improved as the number of processed images increased. The two methods were superior to that in [13] both in terms of computational cost and precision. The most precise method was based on the incremental difference approach, in which adjacent frames are used to estimate time derivatives, in line with Equation 5.
Both the Lucas-Kanade [11] and Horn-Schunck [12] approaches are generalized here, imposing that the estimated velocity field and production term are smooth in space.
2.4.1. Lucas-Kanade approach
Within the Lucas-Kanade framework, for each pixel of the image, the same equation was written for the M neighbouring pixels of the considered one. A Gaussian weighting factor (with standard deviation equal to 2√2 pixels) was assigned to these conditions, to give more prominence to the central pixel and less importance to more distant ones. For each pixel, the flow and the production term were estimated so as to satisfy these multiple conditions optimally in the least-squares sense. Specifically, the following linear system was defined

A X = b,   X = (v₁, v₂, F)^T     (6)

with the following definitions of the matrix A and of the vector b: the rows of A are w_m (∂I^s/∂x(p_m), ∂I^s/∂y(p_m), −1) and the corresponding entries of b are −w_m I_t^s(p_m), for m = 1, ..., M and s = 1, ..., 3(N − 2),     (7)

where p₁, ..., p_M is the set of neighbours of the considered point, w₁, ..., w_M are the weights, and s labels each of the 3(N − 2) pairs of maps (indicated by ij in Equation 5). In this article, 25 neighbours of each point were considered (M = 25); hence, for points located far from the boundary, the neighbours were located in a square of side 5, in line with [14]. The system (6) is overdetermined. The unknown vector X was estimated optimally in the least-squares sense by pseudoinversion (which provides a closed-form analytical solution to the problem).
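A minimal sketch of the per-pixel solve follows, assuming the rows of A and the vector b have already been assembled from the weighted derivatives as in Equation 7. The normal equations are used here in place of an explicit pseudoinverse; the two are equivalent when A has full column rank.

```python
def solve_pixel(rows, rhs):
    """Least-squares solution of A x = b for x = (v1, v2, F) via the
    normal equations A^T A x = A^T b (3x3 Gaussian elimination).

    rows : list of 3-tuples, one per neighbour and map pair,
           e.g. (w*Ix, w*Iy, -w) as in Equation 7
    rhs  : list of corresponding right-hand sides, e.g. -w*It
    """
    # Accumulate A^T A and A^T b.
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for row, b in zip(rows, rhs):
        for r in range(3):
            atb[r] += row[r] * b
            for c in range(3):
                ata[r][c] += row[r] * row[c]
    # Gaussian elimination with partial pivoting on the augmented matrix.
    m = [ata[r] + [atb[r]] for r in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    # Back substitution.
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x  # (v1, v2, F)
```

In a full implementation, this solve would be repeated for every pixel, with rows gathered from the M = 25 neighbours and all 3(N − 2) map pairs.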
2.4.2. Horn-Schunck approach
Following the Horn-Schunck method [12], the smoothness constraint is introduced by adding the energy norms of the gradients of the velocity flow and of the production term as regularization components to Equation 3. Other constraints can be included, in order to introduce a priori knowledge on the solution. For each map pair considered in Equation 5 (here labelled with s), the following functional to be minimized was considered

E = ‖ I_t^s + v⃗ · ∇I^s − F ‖₂² + α²‖∇v⃗‖₂² + α²‖∇F‖₂² + β²‖F v⃗‖₂² + γ²‖F‖₂² + δ²‖v⃗‖₂²     (8)

where α²‖∇v⃗‖₂² is the Horn-Schunck smoothness constraint, α²‖∇F‖₂² is an equivalent constraint for the production term, β²‖F v⃗‖₂² reduces the correlation between flow and production term (forcing production and propagation to appear in different regions), and γ²‖F‖₂² and δ²‖v⃗‖₂² limit the amplitude of the two unknowns (Tikhonov regularization [25]), so that they do not grow large to fit noise details.
The functional (8) can be minimized by solving the associated Euler-Lagrange equations [26]

α²Δv₁ = I_x (I_t + v₁ I_x + v₂ I_y − F) + β²F²v₁ + δ²v₁
α²Δv₂ = I_y (I_t + v₁ I_x + v₂ I_y − F) + β²F²v₂ + δ²v₂
α²ΔF = −(I_t + v₁ I_x + v₂ I_y − F) + β²F(v₁² + v₂²) + γ²F     (9)

where Δ indicates the Laplacian operator. As proposed in [12], an iterative technique (Jacobi's method) was applied to solve the system of equations (9). The nonlinear terms F²v₁, F²v₂, and F(v₁² + v₂²) were estimated from the previous step of the iteration. The Laplacian was approximated as

ΔU ≈ Ū − U     (10)

where Ū is an average over neighbouring pixels estimated from the previous step of the iteration. As time is sampled, Equations 9 were written for each pair of maps considered to estimate the time derivative (refer to Equation 5). The following linear system of equations was obtained for the n-th step of the iteration, for each pair of maps
An estimation of the unknowns, optimal in the least-squares sense, was obtained by pseudoinverting the rectangular matrix containing the conditions (11) for each considered pair of maps. In order to facilitate convergence to smooth solutions, the flow and the production term estimated at each step of the iteration were convolved with the Gaussian mask
The following expressions were chosen for the parameters in Equation 8

where n is the iteration number, rms(∇I) is the average root mean square (RMS) of the gradients of the images, and rms(v⃗^{n−1}) is the RMS of the vector flow estimated at the previous step of the iteration. As the iterations proceed, the values of the parameters decrease, giving more importance to the fit of the data.
The fit of the model to the data was measured by the RMS error of the model with respect to the data, normalized by the norm of the data itself. The RMS error was defined over all the included map pairs, as follows
The algorithm proceeded as long as this RMS error decreased. When the RMS error increased with respect to the previous iteration, the algorithm was stopped, and the flow and production term estimated at the previous step (i.e., those for which the RMS error was minimum) were retained.
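The stopping rule can be sketched generically as follows (an illustrative helper, not the authors' code; `step` and `error` stand for one Jacobi update and the normalized RMS error of Equation 14, respectively):

```python
def iterate_until_error_increases(step, initial_state, error, max_iter=100):
    """Run the iteration while the fitting error decreases; return the
    state with minimum error (the one before the first increase).

    step  : function state -> next state (one Jacobi update)
    error : function state -> normalized RMS error of the model fit
    """
    state = initial_state
    best_err = error(state)
    for _ in range(max_iter):
        candidate = step(state)
        err = error(candidate)
        if err >= best_err:
            break  # error no longer decreasing: keep the previous estimate
        state, best_err = candidate, err
    return state, best_err
```

Because the previous state is kept whenever the error stops decreasing, the returned estimate is always the one with the smallest observed error.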
3. Results
The performance of the methods in tracking objects moving and growing in subsequent images was first tested in simulations. Then, some representative examples of application to radar data are shown.
3.1. Simulated data
The reliability of the algorithms in tracking the motion and growth of 2D Gaussian functions was tested. In a preliminary test, shown in Figures 1 and 2, two Gaussian functions were simulated to follow straight intersecting paths, whereas a third Gaussian function grew at a rate of 0.1 per time sample. Specifically, the three Gaussian functions are defined as follows
where G_i(x, y, t) indicates the i-th Gaussian function, the first two propagating (at velocities (v_x, ±v_y)), the third one remaining stationary but growing at constant rate; moreover, (x_i⁰, y_i⁰) with i = 1, 2, 3 indicates the initial position of the i-th function. The initial positions and the standard deviation (σ = 2) of the three functions were chosen so that their essential supports were separated in all considered images, except where the trajectories of the first two functions intersect (see Figure 1A). Additive Gaussian noise was included with a signal-to-noise ratio (SNR) equal to 10 dB. No smoothing was performed before processing, even though the computation of numerical derivatives would improve by low-pass filtering the images. Time was sampled with 16 images. It is worth noticing that the problem is not well posed, as the flow cannot be estimated at the points where the first two Gaussian functions intersect.
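The synthetic data generation can be sketched as follows (a pure-Python illustration with our own frame size and initial positions, not those of Equation 15):

```python
import math
import random

def gaussian_frame(nx, ny, t, sigma=2.0, vx=1.0, vy=1.0, growth=0.1):
    """One synthetic frame with three Gaussian blobs: two translating at
    velocities (vx, +vy) and (vx, -vy), and one stationary blob whose
    amplitude grows linearly at `growth` per time sample."""
    centers = [(5.0 + vx * t, 5.0 + vy * t, 1.0),        # moving blob 1
               (5.0 + vx * t, ny - 6.0 - vy * t, 1.0),   # moving blob 2
               (nx - 8.0, ny / 2.0, 1.0 + growth * t)]   # growing blob
    frame = [[0.0] * ny for _ in range(nx)]
    for x in range(nx):
        for y in range(ny):
            for cx, cy, amp in centers:
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                frame[x][y] += amp * math.exp(-d2 / (2.0 * sigma ** 2))
    return frame

def add_noise(frame, snr_db, signal_power, rng):
    """Additive Gaussian noise at the requested SNR (in dB)."""
    noise_std = math.sqrt(signal_power / 10.0 ** (snr_db / 10.0))
    return [[v + rng.gauss(0.0, noise_std) for v in row] for row in frame]
```

Sampling `gaussian_frame` at t = 0, 1, ..., 15 reproduces the structure of the test: two blobs on intersecting straight paths and one stationary blob growing in amplitude.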
Three experiments including different numbers of images were performed. The initial and the final images were always considered. The intermediate images were undersampled by a factor of 5 (N = 4 images considered) or 3 (N = 6), or all of them (N = 16) were used to estimate the propagation and growth of the three Gaussian functions. Figure 1 shows results obtained using the algorithm based on the Horn-Schunck approach. Results indicate that, using the minimum considered number of images (N = 4), the flow cannot be estimated: in this case, the production term accounts for the disappearance of the first two Gaussian functions from their initial positions and their appearance in the final positions, with some contribution along their paths. Moreover, the growth of the third Gaussian function is identified. When increasing the number of images to 6, a local flow is estimated close to the initial and final positions of the first two Gaussian functions. The paths of the estimated flow are noisy and not straight. Moreover, the production term includes both the estimate of the growth of the third Gaussian function and some contribution along the paths of the two travelling ones. Including all the images, the estimation of the flow and of the production term is clear: the flow paths are straight and go from the initial to the final positions of the first two Gaussian functions; the production of the third Gaussian function is correctly estimated; the only residual problem is in the region where the paths of the first two Gaussian functions intersect, but there the problem is not well posed, as stated above.
Figure 2 shows a comparison between the two implemented algorithms, considering the same simulations as in Figure 1 and using 16 frames. Both the Lucas-Kanade and Horn-Schunck approaches allow identifying the flow and the production term, with similar results (similar results are also obtained with a lower number of frames).
Different sets of simulated signals are considered in Figure 3 to investigate the performance of the algorithms in estimating the flow as a function of the modulus and direction of the propagation velocity and of the energy of the additive random noise. A single propagating Gaussian function, defined by an expression equal to that of G₁(x, y, t) in Equation 15, was considered (Figure 3A). Its motion was sampled by 10 images.
The algorithms were applied either to the whole set of images (N = 10) or to a subset obtained by undersampling by a factor of 3 (N = 4). The same initial and final images were used (as shown in Figure 3A, lower panel). Ten realizations of Gaussian noise were added to the maps, with an SNR of 10 or 20 dB in different sets of simulations. From each processed set of images, a single velocity vector was computed from the estimated flow, by averaging the flow vectors over the region in which the propagating Gaussian function was larger than the threshold 0.75 in at least one of the processed maps. The estimated modulus of the velocity vector is shown in Figure 3B as a function of the simulated modulus of velocity, superposing curves corresponding to different angles, obtained by averaging over the noise realizations and indicating the standard deviation (STD). In general, the angle has no important effect on the estimated modulus of velocity.
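The reduction of the estimated flow field to a single velocity vector can be sketched as follows (flat-list illustration; the function name and data layout are ours):

```python
import math

def region_velocity(v1, v2, frames, threshold=0.75):
    """Average the estimated flow over the pixels where the tracked object
    exceeds `threshold` in at least one processed frame; return the modulus
    and direction (in degrees) of the resulting single velocity vector.

    v1, v2 : flat lists of flow components, one value per pixel
    frames : list of flat-list frames used to select the region
    """
    sx = sy = n = 0.0
    for p in range(len(v1)):
        if any(frame[p] > threshold for frame in frames):
            sx += v1[p]
            sy += v2[p]
            n += 1
    if n == 0:
        return 0.0, 0.0
    mx, my = sx / n, sy / n
    return math.hypot(mx, my), math.degrees(math.atan2(my, mx))
```

For a uniform flow of (1, 1) px/ΔT inside the selected region, this returns a modulus of √2 and a direction of 45°, as expected.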
Good estimates are obtained by both methods when the whole set of images is considered (Figure 3B2, B4). Using a small number of images (N = 4), the velocity can be estimated only if it is small (Figure 3B1, B3). Indeed, the estimates of the derivatives of the images with respect to the time and space variables are accurate only if the displacement is small (and only under the same condition are the optical flow equations justified [14]). Moreover, in the Lucas-Kanade approach, only a small neighbourhood of each point is explored, so that the pairs of images contributing to the definition of matrix A in Equation 6 might not be correlated within such small regions if the displacement of objects between images is too large. On the other hand, the Horn-Schunck method is based on global constraints, and the Euler-Lagrange Equations 9 include a diffusion operator which couples neighbouring points. It provides reliable estimates up to velocities of about 1.5 pixels per time sample. Thus, a proper choice of parameters (e.g., the extension of the neighbourhood in the Lucas-Kanade approach or the diffusion coefficient α² in the Horn-Schunck approach) could help in following fast movements (as occurring when convective cells are present in the radar images). Nevertheless, increasing the sampling frequency is the best solution (e.g., in [14] different methods were tested on a synthetic sequence manually generated by moving the images of the sequence with the flow vector (0.1, 0.1) pixel/frame). Estimates of the angle of the velocity are depicted in Figure 3C as a function of the modulus and the angle of the simulated velocity, showing the mean and STD of the estimates obtained with different realizations of noise. The direction of propagation is poorly estimated using a small number of images, even with a high SNR. The estimates are much more stable and precise when the number of images increases.
It is worth noticing that, as only propagation was simulated in this case, optical flow algorithms available in the literature could also be applied. The Lucas-Kanade algorithm for optical flow estimation, without the production term, can be obtained by substituting Equations 6 and 7 with the following
where the matrix A and the vector b are defined as
The Horn-Schunck algorithm for optical flow estimation, without the production term, can be obtained by substituting Equation 11 with the following
In general, their results are expected to be better than those obtained using the methods introduced here, in particular when the number of frames is small. Indeed, the new algorithms considered here have an additional degree of freedom (the production term) compared with classical optical flow methods. Thus, for sets of images related only by flow, they need more information to learn that the production term is absent. Nevertheless, with the simulations considered here, the results are comparable, as shown in Table 1, where the errors in estimating the velocity and direction of the flow are reported for the four methods (Horn-Schunck and Lucas-Kanade, with and without the production term), for each considered pair of values of N and SNR. Note that the estimate of the modulus of the velocity is marginally affected by the intensity of the noise, whereas the estimate of the angle becomes less precise as the noise content increases.
For all the simulations considered in this article, the number of iterations required by the Horn-Schunck algorithm to converge was about 5 to 10, and the RMS error in fitting the data (defined in Equation 14) was between 5 and 15% for all the methods considered.
3.2. Application to experimental data
As an example of application, the meteorological conditions during the night (from 23:00 to 7:00) between the 20th and the 21st of November, 2010 were considered. Rainfall rate was estimated using data detected by the X-band radar described in Section 2.1 and shown in Figure 4A, placed on the roof of the Politecnico, close to the centre of Turin. The first considered map of rainfall rate is shown in Figure 4B. The spikes are associated with clutter. Data were resampled to convert the polar coordinates into Cartesian ones, with homogeneous sampling at 500 m resolution. Moreover, the maximum rainfall rate considered was 10 mm/h. Experimental values larger than this limit (assumed to correspond to clutter) were removed and replaced by linear interpolation. A square region centred 15 km East of the centre of Turin, with a side of 20 km, was considered (Figure 4C). Before processing, experimental noise was reduced by a spatial low-pass filter obtained by 2D convolution with a Gaussian mask with standard deviation equal to 500 m, as shown in Figure 4D.
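The clutter-removal step can be sketched in one dimension (a simplified illustration of the 2D procedure described above; function name and layout are ours): samples above the 10 mm/h limit are replaced by linear interpolation between the nearest valid neighbours.

```python
def remove_clutter(rates, max_rate=10.0):
    """Replace rainfall-rate samples above max_rate (assumed clutter) by
    linear interpolation between the nearest valid neighbours along a row."""
    out = list(rates)
    valid = [i for i, v in enumerate(out) if v <= max_rate]
    for i, v in enumerate(out):
        if v > max_rate:
            left = max((j for j in valid if j < i), default=None)
            right = min((j for j in valid if j > i), default=None)
            if left is None:
                out[i] = out[right]      # no valid sample to the left
            elif right is None:
                out[i] = out[left]       # no valid sample to the right
            else:
                w = (i - left) / (right - left)
                out[i] = (1 - w) * out[left] + w * out[right]
    return out
```

In the 2D case, the same idea applies over the resampled Cartesian grid before the Gaussian low-pass filtering.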
The case study concerns a stratiform rain that fell on Turin. According to the meteorological analysis, low pressure over the South of France induced a cyclonic circulation: wind fields at 500 hPa (the height of the clouds responsible for the precipitation) moved from South to North over Northern Italy, as noticeable from the MetOffice pressure map (Figure 5A) and from the Cuneo-Levaldigi radiosounding station near Turin (Figure 5B).
Figure 6 shows two examples of estimation of the flow and production of rainy clouds by the proposed algorithm based on the Horn and Schunck approach. The square region East of Turin shown in Figure 4C, D was studied. Two time ranges were considered: 10 min sampled by 10 radar maps (Figure 6A, B) and 8 h sampled by 17 maps of cumulative rainfall rate, each obtained by adding sampled images over 30 min (Figure 6C, D). The estimated flows are shown on the left (Figure 6A, C). The flow averaged over the longer time range (Figure 6C) is predominantly directed upward, in the North-East direction, in agreement with the indications of the MetOffice pressure map (Figure 5A). On the other hand, the flow estimated from the radar maps recorded during the specific short time period shown in Figure 6A is predominantly directed downward, towards the South, opposite to the indications of the MetOffice pressure map and to the average flow estimated by processing a time period covering the majority of the event (Figure 6C). Moreover, the motions of the clouds appear more turbulent and discontinuous (i.e., with large spatial variations) when a short time range is considered. The estimated productions of rainy clouds are shown on the right of Figure 6 (Figure 6B, D). The production is lower when the time range is larger, probably due to the low-pass filtering effect of cumulating the rainfall rate. This procedure smooths out the differences between successive maps, which the algorithm would otherwise identify as production or extinction of clouds (if they are not moving).
All radar maps presented a stationary local maximum, a clutter echo, centred about 19 km East and 5 km North of Turin. The estimated flow and production term vanish close to this region.
The number of iterations required by the algorithm to converge was about 5 to 10 when processing 10 maps sampled every minute. In such a case, the RMS error (defined in Equation 14) was between 5 and 10%. Processing the 17 maps of rainfall rate cumulated every 30 min required 12 iterations; the RMS error in fitting the data was 1.2%.
4. Discussion and conclusions
Two innovative algorithms are proposed to track the motion of rainy clouds and to identify their generation and loss. The methods are based on the classical Lucas-Kanade [11] and Horn-Schunck [12] optical flow techniques, but also estimate a production term accounting for the appearance, growth, depletion, or extinction of objects in subsequent images. This requires the inclusion of more than two images in the processing.
The two methods have comparable performance when applied to simulated signals (Figures 2 and 3). Moreover, when applied to a set of images satisfying the brightness constancy condition, their performance is similar to that of multi-frame versions of classical optical flow techniques (Table 1).
An important parameter affecting performance is the number of images processed by the algorithms (Figures 1 and 3). Including an increasing number of images makes the results more stable to random noise. Indeed, a single flow and a single production term are estimated from several noisy images, extracting average properties (i.e., the motion and the generation of objects) which appear consistently across different images and reducing the effect of random fluctuations. The flow and production term estimated from sets of simulated images with constant flow and generation improved when the number of processed images increased (Figures 1 and 3 and Table 1). On the other hand, increasing the number of images while keeping the sampling interval constant increases the time range of investigation. As the algorithms assume that the flow and the generation of objects do not change within the processed images, increasing the investigation interval reduces their capability to detect rapid variations developing on a shorter time scale. Thus, proper oversampling, related to the time scale of the phenomenon of interest, is needed for a correct application of the methods. Oversampling is more important for the Lucas-Kanade approach, as the smoothness constraint is imposed locally.
The methods were implemented in Matlab on a Pentium(R) Dual-Core with a clock frequency of 2.8 GHz, 4 GB of RAM, and a 64-bit operating system, using routines running on a single core. The computational cost is shown in Figure 7 in terms of the processing time as a function of the number of frames, for images of different dimensions (notice the erratic processing time of the Horn-Schunck approach, which depends on the number of iterations needed to converge). The computational cost increases rapidly with the dimension of the frame and with the number of images considered. The inclusion of the production term increases the processing time only marginally. The algorithms are amenable to parallel implementation, as the same steps are repeated for each pixel in the image. Moreover, an optimized implementation in a compiled language could significantly reduce the processing time. Thus, when processing about 10 to 20 frames with dimensions similar to those considered here, the algorithms could process data in (quasi) real time. Nevertheless, this is not strictly needed for the specific application to meteorological nowcasting.
The algorithm generalizing the Horn-Schunck approach includes many parameters which can be chosen to fit the specific application. Four parameters allow the user to weight the constraints that drive the algorithm toward a solution with smoothness properties defined a priori. In this work, these parameters were fine-tuned by trial and error. Slowly reducing the parameters at each iteration of the algorithm provided reliable results.
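As an illustration of this annealing strategy, the sketch below implements the classical Horn-Schunck iteration with a single smoothness weight that is slowly reduced at every iteration; the values of `alpha0`, `decay`, and `n_iter` are illustrative choices, not the four parameters or values used in the paper:

```python
import numpy as np

def horn_schunck(f0, f1, alpha0=1.0, decay=0.98, n_iter=250):
    """Classical Horn-Schunck optical flow between two frames, with the
    smoothness weight alpha slowly reduced at every iteration (the
    annealing strategy discussed in the text; parameter values are
    illustrative, not those of the paper)."""
    Iy, Ix = np.gradient(f0)            # spatial gradients
    It = f1 - f0                        # temporal derivative
    u = np.zeros_like(f0)
    v = np.zeros_like(f0)
    nbr = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    alpha = alpha0
    for _ in range(n_iter):
        ubar, vbar = nbr(u), nbr(v)     # local averages of the flow
        num = Ix * ubar + Iy * vbar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = ubar - Ix * num / den
        v = vbar - Iy * num / den
        alpha *= decay                  # anneal the smoothness weight
    return u, v

# Demo: a periodic pattern shifted by one pixel in x should give u ~ 1, v ~ 0
yy, xx = np.mgrid[0:32, 0:32]
f0 = np.sin(2 * np.pi * xx / 16) + np.cos(2 * np.pi * yy / 16)
f1 = np.roll(f0, 1, axis=1)
u, v = horn_schunck(f0, f1)
print(u.mean(), v.mean())
```

A large initial weight keeps the early iterations smooth and well conditioned; the slow decay then lets the solution sharpen toward the data term, which is the behaviour the trial-and-error tuning aimed for.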
A representative example of application to rainy cloud tracking is shown in Figures 4 to 6. A stratiform rainfall event with its main stream from South to North was investigated. When a long portion (8 h) of the event was analysed using images of cumulated rainfall rate, the main direction of the estimated flow agreed with the indications of the Met Office pressure map. Nevertheless, a local flow in the opposite direction could be identified when a shorter time range (10 min) was considered. Moreover, the dynamics were more irregular in space when the investigated time range was shorter. These experimental observations are in line with the nonlinear and chaotic dynamics characterizing meteorological variables [20]. Indeed, the equations governing the temporal evolution of weather are a set of partial differential equations with chaotic solutions, showing self-similar (fractal) geometry [27]. This implies that similar variations of flow can be observed at different spatial scales (becoming more and more complicated and discontinuous as the spatial scale is reduced). Moreover, similar past events can evolve into very different conditions, as already noted in [28]. Indeed, small differences in the conditions measured at a point can be amplified by the nonlinear dynamics. The Lyapunov exponent is a measure of the rate of exponential divergence of trajectories starting from neighbouring points [29]. When forecasting weather dynamics, the prediction horizon is usually very short, especially in unstable conditions, and is related to the inverse of the Lyapunov exponent [30] characterizing the chaotic system.
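The divergence of nearby trajectories can be illustrated on the Lorenz-63 system [28] (a standard textbook example, not part of the paper's processing): the logarithmic growth rate of the distance between two initially close states estimates the largest Lyapunov exponent, whose inverse bounds the prediction horizon:

```python
import numpy as np

def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz-63 system."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(s)
    k2 = f(s + dt / 2 * k1)
    k3 = f(s + dt / 2 * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s1 = np.array([1.0, 1.0, 1.0])
for _ in range(4000):                    # discard the transient (20 time units)
    s1 = lorenz_step(s1)
s2 = s1 + np.array([1e-8, 0.0, 0.0])     # tiny perturbation of the state
d0 = np.linalg.norm(s2 - s1)
steps, dt = 2000, 0.005                  # follow both copies for 10 time units
for _ in range(steps):
    s1, s2 = lorenz_step(s1), lorenz_step(s2)
lam = np.log(np.linalg.norm(s2 - s1) / d0) / (steps * dt)
print(f"finite-time largest Lyapunov exponent ~ {lam:.2f}")  # literature value ~0.9
```

A perturbation of 1e-8 grows by several orders of magnitude in ten time units, which is the mechanism behind the short prediction horizon discussed above.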
As a consequence of the chaotic and fractal behaviour of the weather system, even when it appears to be stable and predictable on a large spatial scale, the meteorological conditions can suddenly change in some restricted regions [31]. The local, short-time-range variations of the estimated flow shown in Figure 6A are an example of motion mainly in the opposite direction with respect to the average stream dictated by the pressure distribution (Figure 5A), and in agreement with the local flow estimated over a long time range (Figure 6C). The goal of local weather nowcasting is to provide precise predictions of the intensity, location, onset, and extinction of significant events. Both the time and space scales must be sampled at high resolution. The proposed algorithm, together with a technique to extrapolate the estimated flow, could be used to perform local nowcasts, even though the time range of reliable prediction could be limited by the short spatial range of investigation, in particular in the case of convective precipitation. Other methods to perform local predictions are based on the analysis of a few time series representing meteorological variables measured at the location for which the forecast is of interest. The prediction is not based on a simple linear extension of the present conditions, but on a nonlinear algorithm comparing the actual state with similar ones found in the past [29, 31]. However, local variables usually contain poor information about the stability or instability of weather conditions, which is important for a reliable forecast. Indeed, predicting the onset or the duration of rainy events from local measurements is very difficult, as it requires identifying the transition between two completely different weather states. The task could benefit from specific indications, extracted from a short range investigation, on whether the weather is stable or not.
The algorithm presented here could provide information about the presence of rainy clouds in the vicinity of the local position of interest, their movements (possibly related to the wind at high altitude), which could be turbulent or stationary, and the formation and growth of clouds. All this information could be converted into a set of scalar time series feeding a predictor model, together with the other time series already used, in order to improve the reliability of the forecast.
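For instance, the estimated fields could be reduced to scalar descriptors; the features below (hypothetical names and choices, not defined in the paper) summarize a flow field's mean speed, mean heading, directional spread as a crude turbulence indicator, and net production:

```python
import numpy as np

def flow_features(u, v, p):
    """Reduce estimated flow (u, v) and production (p) fields to scalar
    descriptors; feature names are hypothetical, for illustration only."""
    speed = np.hypot(u, v)
    return {
        "mean_speed": float(speed.mean()),              # px per sampling interval
        "mean_direction_deg": float(np.degrees(np.arctan2(v.mean(), u.mean()))),
        "direction_spread": float(np.std(np.arctan2(v, u))),  # turbulence hint
        "net_production": float(p.sum()),               # growth minus depletion
    }

# Demo: a uniform diagonal flow with no production (heading ~ 45 degrees)
u = np.full((8, 8), 1.0)
v = np.full((8, 8), 1.0)
p = np.zeros((8, 8))
print(flow_features(u, v, p))
```

One such feature vector per processing window would yield exactly the kind of scalar time series that could be appended to the inputs of a nowcasting predictor.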
Abbreviations
RMS: root mean square
SNR: signal-to-noise ratio
STD: standard deviation
2D: two-dimensional
References
Giangrande SE, Ryzhkov AV: Polarimetric method for bright band detection. Proceedings of the 11th Conference on Aviation, Range, and Aerospace 2004.
Schmidt J, Turek G, Clark MP, Uddstrom M, Dymond JR: Probabilistic forecasting of shallow, rainfall-triggered landslides using real-time numerical weather predictions. Nat Hazards Earth Syst Sci 2008, 8: 349-357. 10.5194/nhess-8-349-2008
Klein A, Kavoussi S, Lee RS: Weather forecast accuracy: study of impact on airport capacity and estimation of avoidable costs. Eighth USA/Europe Air Traffic Management Research and Development Seminar 2009.
Gabella M, Orione F, Zambotto M, Turso S, Fabbo R, Pillon A: A portable low-cost X-band radar for rainfall estimation in Alpine valleys: analysis of radar reflectivities and comparison between remotely sensed and in situ measurements; test of a prototype in Friuli Venezia Giulia. FORALPS Technical Report 2008, 3.
Delrieu G, Andrieu H, Creutin JD: Quantification of path-integrated attenuation for X- and C-band weather radar systems operating in Mediterranean heavy rainfall. J Appl Meteorol 2000, 39: 840-850. 10.1175/1520-0450(2000)039<0840:QOPIAF>2.0.CO;2
Park SG, Bringi VN, Chandrasekar V, Maki M, Iwanami K: Correction of radar reflectivity and differential reflectivity for rain attenuation at X band. Part I: theoretical and empirical basis. J Atmos Oceanic Tech 2005, 22: 1621-1632. 10.1175/JTECH1803.1
Gabella M, Notarpietro R, Turso S, Perona G: Simulated and measured X-band radar reflectivity of land in mountainous terrain using a fan-beam antenna. Int J Remote Sens 2008, 29(10): 2869-2878. 10.1080/01431160701596149
Bellon A, Austin GL: The evaluation of two years of real-time operation of a short-term precipitation forecasting procedure (SHARP). J Appl Meteorol 1978, 17(12): 1778-1787. 10.1175/1520-0450(1978)017<1778:TEOTYO>2.0.CO;2
Johnson JT, MacKeen PL, Witt A, Mitchell ED, Stumpf GJ, Eilts MD, Thomas KW: The storm cell identification and tracking algorithm: an enhanced WSR-88D algorithm. Weather Forecast 1998, 13: 263-276. 10.1175/1520-0434(1998)013<0263:TSCIAT>2.0.CO;2
Parsons DB, Hobbs PV: The mesoscale and microscale structure of cloud and precipitation in midlatitude cyclones, XI, comparison between observational and theoretical aspects of rainbands. J Atmos Sci 1983, 40: 2377-2397. 10.1175/1520-0469(1983)040<2377:TMAMSA>2.0.CO;2
Lucas BD, Kanade T: An iterative image registration technique with an application to stereo vision. Proceedings of Imaging Understanding Workshop 1981, 121-130.
Horn BKP, Schunck BG: Determining optical flow. Artif Intell 1981, 17: 185-203. 10.1016/0004-3702(81)90024-2
Irani M: Multi-frame optical flow estimation using subspace constraints. Proceedings of the Seventh IEEE International Conference on Computer Vision 1999, 1: 626-633.
Wang CM, Fan KC, Wang CT: Estimating optical flow by integrating multi-frame information. J Inform Sci Eng 2008, 24: 1719-1731.
Bowler NEH, Pierce CE, Seed A: Development of a precipitation nowcasting algorithm based upon optical flow techniques. J Hydrol 2004, 288: 74-91. 10.1016/j.jhydrol.2003.11.011
Peura M, Hohti H: Optical flow in radar images. European Conference on Radar in Meteorology and Hydrology (ERAD 2004), Copernicus 2004, 454-458.
Wilson JW, Crook NA, Mueller CK, Sun J, Dixon M: Nowcasting thunderstorms: a status report. Bull Am Meteorol Soc 1998, 79(10): 2079-2099. 10.1175/1520-0477(1998)079<2079:NTASR>2.0.CO;2
Pasero E, Moniaci W, Meindl T, Montuori A: NeMeFo: Neural meteorological forecast. SIRWEC 12th International Road Weather Conference, Bingen (EUD), Paper Topic II4 2004.
Mesin L, Orione F, Pasero E: Nonlinear adaptive filtering to forecast air pollution. In Adaptive Filtering. Edited by: Morales LG. Intech, Rijeka, Croatia; 2011.
Tsonis AA: The impact of nonlinear dynamics in the atmospheric sciences. Int J Bifurc Chaos 2001, 11: 881-902. 10.1142/S0218127401002663
Marshall JS, Palmer WM: The distribution of raindrops with size. J Meteor 1948, 5: 165-166. 10.1175/1520-0469(1948)005<0165:TDORWS>2.0.CO;2
Doelling IG, Joss J, Riedl J: Systematic variations of Z-R relationships from drop size distributions measured in northern Germany during seven years. Atmos Res 1998, 47-48: 635-649.
Limb JO, Murphy JA: Estimating the velocity of moving images in television signals. Comput Graph Image Process 1975, 4: 311-327.
Fleet DJ, Weiss Y: Optical flow estimation. In Mathematical Models for Computer Vision: The Handbook. Edited by: Paragios N, Chen Y, Faugeras O. Springer, New York, USA; 2005: 239-258.
Engl HW, Hanke M, Neubauer A: Regularization of Inverse Problems. Kluwer, Dordrecht; 1996.
Courant R, Hilbert D: Methods of Mathematical Physics. Interscience, New York; 1953.
Lovejoy S, Mandelbrot BB: Fractal properties of rain, and a fractal model. Tellus 1985, 37A: 209-232. 10.1111/j.1600-0870.1985.tb00423.x
Lorenz EN: Deterministic nonperiodic flow. J Atmos Sci 1963, 20(2): 130-148. 10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2
Kantz H, Schreiber T: Nonlinear Time Series Analysis. Cambridge University Press, Cambridge, United Kingdom; 1997.
Haykin S: Neural Networks: A Comprehensive Foundation. Pearson Education, Delhi, India; 1999.
Pasero E, Moniaci W: Artificial neural networks for meteorological nowcast. In Proceedings of CIMSA 2004, IEEE Conference on Computer Intelligence for Measurement Systems and Applications. Boston, MA, USA; 2004: 36-39.
Acknowledgements
This work was sponsored by the national project AWIS (Airport Winter Information System), funded by Piedmont Authority, Italy. The author is deeply indebted to Riccardo Notarpietro and Andrea Prato for their interesting comments and suggestions.
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Mesin, L. Short range tracking of rainy clouds by multi-image flow processing of X-band radar data. EURASIP J. Adv. Signal Process. 2011, 67 (2011). https://doi.org/10.1186/1687-6180-2011-67
Keywords
 X-band radar
 optical flow
 nowcasting