
Object Association and Identification in Heterogeneous Sensors Environment

Abstract

An approach for dynamic object association and identification is proposed for a heterogeneous sensor network consisting of visual and identification sensors. Visual sensors track objects by 2D localization, and identification sensors (e.g., an RFID system, fingerprint, or iris recognition system) are incorporated into the system for object identification. This paper illustrates the feasibility and effectiveness of associating the positions of objects estimated by visual sensors with the identifications registered by identification sensors, even under simultaneous registration of multiple objects. The proposed approach utilizes the object dynamics of entering and leaving the coverage of identification sensors, where the location information of identification sensors and objects is available. We investigate the necessary association conditions using set operations, where the sets are defined by the dynamics of the objects. The coverage of an identification sensor is approximated by its maximum sensing coverage for a simple association strategy. The effect of the discrepancy between the actual and the approximated coverage is addressed in terms of the association performance. We also present a coverage adjustment scheme using the object dynamics for association stability. Finally, the proposed method is evaluated with a realistic scenario. The simulation results demonstrate the stability of the proposed method against nonideal phenomena such as false detection, false tracking, and inaccurate coverage models.

1. Introduction

Recently, heterogeneous sensor networks have received much attention in the field of multiple-object tracking because they exploit the advantages of different modalities [1, 2]. The visual sensor is one of the most popular sensors due to its reliability and ease of analysis [3–5]. However, a visual sensor-based tracking system is limited to recording the trajectories of objects because visual sensors have several limitations for object identification [6–9]. One of the main difficulties in visual sensor-based object tracking is that distinguishable characteristics are nontrivial to construct for all detected targets, because objects can be similar in color, size, and shape. Moreover, accurate feature extraction is not always guaranteed. Therefore, identifying an object with visual features is a challenging problem. On the other hand, several identification sensors, such as RFID (Radio Frequency Identification) systems, fingerprint, or iris recognition systems, have been utilized for object identification. However, the functionality of these sensors is limited to object identification, and they are difficult to use for object tracking [10–12]. They can only alert human operators to events triggered by identification sensors but cannot support intelligent decisions about them; for example, they cannot monitor the movement patterns of authorized people in special areas. Therefore, identification sensors can complement a visual sensor-based tracking system in an intelligent surveillance system.

There have been related works on surveillance using heterogeneous types of sensors, addressing various issues such as heterogeneous data association and efficient network architecture. Schulz et al. [13] proposed a method to track and identify multiple objects by using ID-sensors, such as infrared badges, together with anonymous sensors, such as laser range-finders. Although the system successfully associates the anonymous sensor data with ID-sensor data, the transition between the two phases is simply driven by a heuristic based on the average number of different assignments in the Markov chains. Moreover, it does not provide a recovery method against losing the correct ID, and the number of hypotheses grows extremely fast whenever several people are close to each other. Shin et al. [14] proposed a network architecture for a large-scale surveillance system that supports heterogeneous sensors such as video and RFID sensors. Although the event-driven control effectively minimizes the system load, the paper deals only with the mitigation of data overload, not with the association of heterogeneous data. Cho et al. [15, 16] proposed a heterogeneous sensor node with an acoustic sensor and an RFID sensor whose coverages are identical. The association of the estimated position and the identification of an object is achieved by a simple rule: one and only one identification is registered within the coverage of the sensor node while its corresponding position is estimated within that coverage. The performance of these approaches, however, can be significantly degraded by the coverage uncertainty of the acoustic and RFID sensors, which is caused by the characteristics of acoustic and RF signals. The system cannot accurately calibrate the time-varying coverage of those sensors. Moreover, multiple objects near the boundary of the sensor coverage may obscure the object identification by identification sensors and the object localization by acoustic sensors. Therefore, an effective association algorithm is needed that can manage the inconsistent registration of identifications.

In this paper, we present an approach for dynamic object identification in heterogeneous sensor networks in which two functionally different sensors are incorporated. Visual sensors associate objects and track them using the geometric relationship among multiple cameras [17, 18]. The visual sensor-based tracking system is assisted by identification sensors in identifying the estimated positions of objects. The coverage of an identification sensor is approximated by its maximum sensing coverage, and the association system applies a simple association strategy to the positions estimated by the visual sensors and the identifications registered by the identification sensors. The key issue in heterogeneous sensor networks is to provide the association system with common reference information for fusing heterogeneous data. The visual sensor-based tracking system utilizes the known coverage of the identification sensors to associate the heterogeneous data. The locations of identification sensors are known, and they are used jointly with the locations of objects to detect the object dynamics of entering and leaving the sensor coverage. The sets of estimated positions and identifications are defined for the coverage of each identification sensor, and their association is established by checking the temporal change of the sets. In order to solve the association problem under coverage uncertainty, group and incomplete group associations are introduced. They enable the association system to maintain identification candidates for the corresponding estimated positions until a single association is established. A group association also stabilizes the association performance against inconsistent registration of identifications by an identification sensor. Additional association cases are investigated to increase the association performance by checking the object dynamics. We also identify further association problems caused by the discrepancy between the actual coverage of the identification sensor and the coverage approximated by the visual sensor and present a coverage adjustment scheme using the object dynamics. Finally, the proposed association method is evaluated with a realistic scenario and analyzed to show its stability with respect to the degree of discrepancy between the approximated and actual identification sensor coverage, the variance of the actual identification sensor coverage, and the tracking performance.

The remainder of this paper is organized into four sections. In Section 2, we present the overview of an application model and the problem description. Section 3 explains an association method for multiple objects based on group and incomplete group associations, with consideration of the coverage uncertainty problems. In Section 4, the proposed method is evaluated with a realistic application scenario and analyzed under nonideal conditions such as the discrepancy between the approximated and actual identification sensor coverage and the variance of the actual identification sensor coverage. Finally, the paper is summarized in Section 5.

2. Application Model and Problem Description

2.1. Application Model

The heterogeneous sensor network in Figure 1 consists of two types of sensors: visual sensors and identification sensors (e.g., check-in at an airport is equivalent to identification by an identification sensor). While identification sensors are assumed to operate correctly, they can be classified into two types with respect to the coverage issue in the proposed approach: RFID-type ID sensors and non-RFID-type ID sensors. When non-RFID-type ID sensors are used for object identification, the effect of coverage uncertainty is minimized since they usually identify a single object at a time; however, they usually require a long processing time to extract and analyze the features of a target. On the other hand, while RFID-type ID sensors have the benefit of a short processing time, they suffer from coverage uncertainty: multiple objects can be registered simultaneously in the uncertain coverage of an RFID-type ID sensor. Since objects emit radio signals to RFID-type sensors, the effect of coverage uncertainty is maximized with this type of sensor. In practice, object collision and tracking failure are common problems for both RFID-type and non-RFID-type sensors, whereas coverage uncertainty is a problem only for RFID-type sensors. The main problem is how to associate the position information from a visual sensor with the identification information from an identification sensor under these conditions. In an ideal situation, this is a simple engineering task: one registered ID is associated with one estimated position within the coverage of an identification sensor. However, ID assignment becomes nontrivial when objects are densely populated in the surveillance region, because frequent collisions between objects or simultaneously entering objects prevent the simple ID assignment. A collision between objects can lead to a tracking failure, since the objects are too close to be differentiated for position and ID assignment.

Figure 1

Example of an application model with heterogeneous sensors of visual sensors and identification sensors.

The proposed approach can be applied not only to public areas (e.g., schools, hospitals, and shopping malls) but also to highly secured areas (e.g., airports, military facilities, and government organizations). As an example, serious offenders with attached ID tags can be tracked with the proposed method in order to ensure safety in public places in cities. The surveillance system with the proposed approach can also keep tracking passengers after airplane check-in or military personnel in a special area. It is assumed that each object has its own identification, such as an RFID tag, fingerprint, or iris. Identification sensors are usually installed at the gates of restricted areas, and visual sensors track the objects; for the airport application, the check-in counter plays the role of the ID-sensor. Whenever an object goes through a gate, the ID registered by the identification sensor is associated with the position estimated by the visual sensors. The system continuously watches the surveillance region by checking authorized IDs in the restricted areas.

Figure 2 shows the architecture of the association system considered in this paper. Visual sensors continuously detect and track objects by various techniques [19–21]. In order to find the corresponding targets among multiple cameras, the locally initiating homographic line method is used [17], and objects are localized by a simple 2D localization algorithm [18]. On the other hand, identification sensors register the identifications of objects within their own coverage. The association of an object at a given sampling time is defined as

(1)

where the first component is the estimated 2D position from the visual sensors and the second is the identification obtained from the identification sensor.
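
As a concrete illustration, the association record in (1) could be held in a small data structure such as the following Python sketch; the class and field names are illustrative and not taken from the paper.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Association:
    """Association of one tracked object at a sampling time, after (1)."""
    position: Tuple[float, float]   # 2D position estimated by the visual sensors
    identification: Optional[str]   # ID from an identification sensor; None while unassociated
```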

Figure 2

Illustration of the overall architecture of the proposed heterogeneous sensor system using visual sensor and identification sensor.

Figure 3 shows the association of the two different types of signals to identify the estimated position of an object. Consider the set of objects' positions inside the coverage of an identification sensor at a given time, where, in the ideal case, the actual coverage radius of the identification sensor is known. Since the association system knows the locations of identification sensors and the estimated positions of objects, it can check whether an object is within the coverage of an identification sensor from the distance between them. Define the set of objects' positions that are within the coverage of the identification sensor but not yet associated with an identification at a given time as

(2)

where the quantities in (2) are the location of the identification sensor, the distance between the sensor and an estimated position, and an indicator function that marks estimated positions without an associated identification. The radius used here is the maximum radius of the identification sensor within which the positions of objects are collected. Since the actual coverage of an identification sensor can vary while the visual sensor checks objects against this fixed radius, the two radii can differ. Similarly, the set of identifications registered by the identification sensor but not yet associated is defined.
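
A minimal sketch of how such a set could be computed is given below, assuming the system stores positions and associations in dictionaries keyed by visual track; the function and argument names are illustrative only.

```python
import math

def unassociated_positions_in_coverage(positions, associations, sensor_xy, radius):
    """Return estimated positions inside the approximated coverage of an
    identification sensor that do not yet have an associated identification.

    positions    -- dict mapping a visual track to its (x, y) estimate
    associations -- dict mapping a visual track to its ID, or None if unassociated
    sensor_xy    -- known (x, y) location of the identification sensor
    radius       -- approximated (maximum) sensing radius of the sensor
    """
    inside = {}
    for track, (x, y) in positions.items():
        distance = math.hypot(x - sensor_xy[0], y - sensor_xy[1])
        if distance <= radius and associations.get(track) is None:
            inside[track] = (x, y)
    return inside
```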

Figure 3

Example of an association and identification with an ideal sensor coverage.

The simple association condition for a single object is given by

(3)

where the cardinality operator denotes the number of elements in a set [15, 16]. In other words, for an identification sensor at a given time instance, if there is exactly one unassociated identification (from the identification sensor) and exactly one unassociated object position (from the visual sensors), the association can simply be made. However, in practical applications, the condition in (3) may not be satisfied.
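
The simple rule can be sketched directly from the counts of the two sets; the helper below is an illustration under the same assumed data layout as before, not the authors' implementation.

```python
def try_simple_association(unassoc_positions, unassoc_ids):
    """Condition (3), sketched: exactly one unassociated position and exactly
    one unassociated identification inside the coverage are paired directly."""
    if len(unassoc_positions) == 1 and len(unassoc_ids) == 1:
        (track, _position), = unassoc_positions.items()
        (tag,) = tuple(unassoc_ids)
        return {track: tag}
    return {}
```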

2.2. Problem Description

The association problems can be nontrivial, especially when RFID-type identification sensors are used. Because those sensors rely on the reception of a radio frequency signal, which is easily distorted by the environment, the coverage of the sensor can become time-varying without this being known to the visual sensor. The actual coverage of an identification sensor can then differ from the coverage approximated by the visual sensor, and the condition in (3) may not be satisfied: there may be more unassociated object positions than unassociated IDs, or vice versa. Even if the coverage of the identification sensor is not time-varying, the coverage uncertainty problem can still arise when objects are densely populated near the boundary of the coverage. In order to adapt to the time-varying coverage of the identification sensor, the visual sensor assumes the maximum sensing coverage of the identification sensor.

Violation of the condition in (3) can also happen because of the coverage discrepancy between the sampling intervals of the two sensors. For example, an ID registered during one sampling interval of the visual sensor can be associated with multiple estimated positions within the coverage of an identification sensor. The ideal situation for an association is that one and only one ID is registered during one sampling interval of the visual sensor and one position is newly added and estimated at each sampling time within the coverage of an identification sensor. However, the registration of identifications within the coverage approximated by the visual sensor is not always guaranteed due to the coverage uncertainty. Identifications may not be registered sequentially when multiple objects enter the approximated coverage, and the registration times of identifications may not coincide with the estimation times of the corresponding positions. It is then difficult to associate identifications with estimated positions using only the simple association condition in (3).

The association problems become more difficult when objects with and without identifications coexist. In particular, under coverage uncertainty, the association system cannot clearly determine whether an object has an ID or not. A deterministic one-to-one assignment may falsely associate identifications with unassociated estimated positions. Moreover, the association system may switch IDs while tracking multiple objects when the objects collide with each other. Therefore, the association system requires an effective association algorithm that can recover from association failures by managing the coverage uncertainty.

3. Association and Identification with Coverage Uncertainty

3.1. Multiple Objects Association

3.1.1. Association without Coverage Uncertainty

Even when the coverages of the identification sensor and the visual sensor are identical, an association failure, that is, a violation of the condition in (3), can happen mainly for two reasons: simultaneous entrance and collision. When multiple objects simultaneously enter the coverage of the identification sensor, the condition in (3) is not satisfied, since multiple objects are registered during a single sampling time of the visual sensor. As investigated in [15], increasing the sampling time of the visual sensor can alleviate the problem, but it is not a fundamental solution to the simultaneous entrance problem. A collision between objects can lead to a tracking failure, since the objects are too close to be differentiated for position and ID assignment. Although the visual sensor can track multiple objects after the collision, the associations between the objects and the IDs are no longer valid. If the dynamic transition model of the objects were known, an identification assignment could be estimated through tracking; however, an accurate model is not always available to the association system. The existing methods in [13, 15] wait for a new association until the objects involved in the association failure enter the coverage of a new identification sensor. Although this provides an association recovery, all the established associations are lost by the collision.

In order to deal efficiently with the association failures, a group association can be used. It can be initiated by a simultaneous entrance or a collision. Consider the set of association groups, where each group is defined by

(4)

where the two components of each group are the set of positions and the set of identifications, respectively, for that group association at a given time; the number of such groups is maintained for each identification sensor. A group association within the coverage of an identification sensor is established by

(5)

In other words, for an identification sensor at a time instance, if there are more than one unassociated ID (from the identification sensor) and the same number of unassociated object positions (from the visual sensors), then a group association can be made.
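
A sketch of this grouping rule is given below; it reuses the assumed dictionary/set layout of the earlier helpers and simply binds the equal-sized sets together so that every identification remains a candidate for every position in the group.

```python
def try_group_association(unassoc_positions, unassoc_ids):
    """Condition (5), sketched: more than one unassociated identification and
    the same number of unassociated positions are bound together as a group;
    every identification is kept as a candidate for every position in the group."""
    if len(unassoc_positions) > 1 and len(unassoc_positions) == len(unassoc_ids):
        return {"positions": set(unassoc_positions), "ids": set(unassoc_ids)}
    return None
```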

Once multiple objects are associated as a group with the same number of identifications, they are considered to have associated identifications, but they remain in the sets of unassociated positions and identifications. Suppose that two positions are associated with two identifications as a group because of a simultaneous entrance or a collision. If a newly estimated position is not associated with any identification and an identification different from those of the group is registered in the sensor coverage, then the newly registered identification is associated with the newly estimated position by

(6)

which is the condition of association, modified from the condition in (3). Although the condition in (6) establishes a single association for a newly added object, such a single association cannot be established for an object in a group association by the condition in (6).

When there are multiple objects inside the coverage, the association system can utilize the object dynamics of entering or leaving the coverage to establish a single association for an object in a group association. The association condition for an object entering the coverage of an identification sensor is

(7)

and for an object leaving the coverage of an identification sensor, the condition is

(8)

These conditions in (7) and (8) can be extended to associate multiple objects in group associations with their own identifications. If an estimated position is in a group association, it can be differentiated from newly added positions that are not in a group association. Suppose that one set collects the positions belonging to a group association and another set collects the identifications corresponding to them. Then, the conditions in (7) and (8) for entering and leaving objects are modified to

(9)
(10)

respectively. A group association is divided into single association(s) or other group associations by these conditions.
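
The sketch below illustrates the leaving-side split of a group association under the assumed data layout; the entering-side case is symmetric. The function name and the one-object-at-a-time restriction are simplifications for illustration.

```python
def split_group_on_leaving(group, tracks_in_coverage, ids_in_coverage):
    """Leaving dynamics, sketched after (8)/(10): if exactly one member of a
    group association has left the coverage and exactly one of the group's
    identifications has been deregistered, pair them as a single association."""
    left_tracks = group["positions"] - set(tracks_in_coverage)
    left_ids = group["ids"] - set(ids_in_coverage)
    if len(left_tracks) == 1 and len(left_ids) == 1:
        track, tag = next(iter(left_tracks)), next(iter(left_ids))
        group["positions"].discard(track)
        group["ids"].discard(tag)
        return {track: tag}
    return {}
```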

3.1.2. Effects of Coverage Uncertainty

The entering or leaving condition in the group association can only be satisfied when the coverages of the identification sensor and the visual sensor are identical. The discrepancy between the actual coverage of the identification sensor and the coverage approximated by the visual sensor may generate cases in which the conditions are not satisfied, because the registered identifications of objects within the actual coverage may not be consistent with their estimated positions. For example, suppose that two positions are associated with two identifications as a group and that one object enters or leaves the coverage before the other. In order to split the group into single associations, the two identifications need to be registered or deregistered sequentially, in the order in which the objects enter or leave the coverage. However, regardless of the entering or leaving order observed by the visual sensor, the two identifications can occasionally be registered or deregistered at the same time due to the coverage uncertainty. In this case, the entering or leaving conditions in the group association are not satisfied for a single association. Another association problem arises from the inconsistent registration of identifications within the coverage approximated by the visual sensor. Since not all identifications are always registered inside the coverage of the identification sensor, the sets of positions and identifications used in the entering or leaving conditions may be inconsistent; that is, the association system may not always correctly determine whether an object has entered or left the coverage of an identification sensor. The incomplete group association is introduced to effectively utilize these inconsistent registrations of identifications. An incomplete group association is established by

(11)

where each object is registered as an element of the incomplete group association with possible identification candidates.

Suppose that one identification is not registered while another is registered, even though both corresponding positions are estimated within the coverage. Then, both positions are registered as elements of an incomplete group association. At every time instance at which the condition in (11) is satisfied, the newly registered identifications are added to the candidates. However, due to the coverage uncertainty, it is not guaranteed that an object in an incomplete group association has its own identification among its candidates, and objects without identifications may have irrelevant identifications in their candidates. Elements of an incomplete group are removed when they are associated with other estimated positions by a single or group association. Whereas the associable identifications of an object in a group association are limited to the group's identification set, the estimated position of an object in an incomplete group association can be associated with an identification outside its candidates. Therefore, an object in an incomplete group association establishes a single association by using

(12)

where the two sets are the positions belonging to the incomplete group association and the candidate identifications corresponding to them.
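
The bookkeeping of candidates in an incomplete group association could look like the following sketch, again under the assumed data layout; in a real system the candidate sets would also be pruned whenever single or group associations are established.

```python
def update_incomplete_group(candidates, unassoc_positions, unassoc_ids):
    """Condition (11), sketched: when the numbers of unassociated positions and
    registered identifications inside the coverage disagree, every currently
    registered identification is kept as a candidate for every unassociated
    position; candidates are pruned later when single or group associations form."""
    if unassoc_positions and unassoc_ids and len(unassoc_positions) != len(unassoc_ids):
        for track in unassoc_positions:
            candidates.setdefault(track, set()).update(unassoc_ids)
    return candidates
```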

3.2. Group Association by Temporal Set Maintenance

The group maintenance discussed above is based on the set of estimated positions and the set of identifications at each sampling time. However, the registration uncertainty of identifications may delay the establishment of a group association. For example, the column "Without Temporal Set Maintenance" in the table of Figure 4 shows the variation of the sets of estimated positions and identifications at each sampling time. Since the two identifications are registered at different sampling times, the corresponding positions are associated as an incomplete group association. The problem with an incomplete group association is that it keeps generating further incomplete group associations until its members are resolved into a single or group association. For example, when a new identification is registered in the coverage, the association system cannot clearly recognize it as belonging to a newly added object because of the unassociated identifications, and all of the positions become an incomplete group association again by the condition in (11).

Figure 4

Illustration of a case in which group association is not established by the registration uncertainty of identifications.

In order to increase the chance of establishing a group association, the association system can keep identifications that were registered at different sampling times as long as the objects stay within the coverage. The temporally maintained set of identifications in the coverage is updated by

(13)

If an object leaves the coverage, the temporally maintained set should not keep the previously registered identifications, because the association system does not know which object left the coverage. By using the temporally maintained identification set, the association system obtains the group association condition

(14)

The column of "With Temporal Set Maintenance" in the table of Figure 4 shows how the sets of estimated positions and identifications vary using the temporal set maintenance. are associated with as a group at . Since is associated with at the next sampling time, and are removed in and.

Figure 5 shows the performance comparison between the association algorithms with and without the temporal set maintenance. Ten objects move dynamically around the surveillance region, in which four identification sensors are installed. At every sampling interval of an identification sensor, each object is registered with probability 0.5. It is assumed that the system fails in tracking when objects come within 0.3 m of each other. The association simulation is repeated 100 times and the results are averaged in order to reflect the effect of the coverage uncertainty. The blue line indicates the simulation result with the temporal set maintenance. When the identification set is temporally maintained by the condition in (13), temporarily unregistered identifications are still kept in the set, which increases the possibility of establishing a group association rather than an incomplete group association. Since the objects in a group association are distinguished from other objects, the chance of establishing a single association also increases. As a result, the association rate increases faster with the temporal set maintenance than without it.

Figure 5

Comparison of the association performance with and without temporal set maintenance: average single association rate and average group association rate.

3.3. Association Stability in Mismatched Model

Association performance is also influenced by the discrepancy between the approximated coverage and the actual coverage. When the approximated coverage is larger than the actual coverage, positions of objects whose identifications are not registered can be estimated within the approximated coverage; the number of group or incomplete group associations then increases through the conditions in (5) or (11). This occurs frequently when objects move around the boundary of the coverage of an identification sensor. The effect of an approximated coverage that is smaller than the actual coverage is similar: since the number of registered identifications differs from the number of estimated positions within the approximated coverage, group or incomplete group associations may again increase. However, the estimated positions of objects are eventually identified when single associations are established. While an inaccurate coverage model may delay the establishment of single associations, the number of single associations eventually increases through the object dynamics.

The irregular sensor coverage can cause a false association with a noncorresponding identification when objects move around the boundary of the modeled coverage. For example, one object's position may not be estimated inside the modeled coverage while another object's position is, and yet only the first object's identification is registered inside the actual coverage. The estimated position can then be falsely associated with the noncorresponding identification by the condition in (3). Since a single association is established, the association system cannot detect the false association immediately. However, it can cope with false associations using two approaches. One is a passive approach that uses the property of a group association: if the objects involved in a false association collide inside or outside the coverage, the false association naturally becomes a group or incomplete group association. The other approach is to confirm the association by checking whether duplicated identifications exist in the association system; if a false association is confirmed, the falsely associated position reverts to an unassociated position. Therefore, false associations are eventually resolved by a group association or by detecting an identification with duplicate registrations at the coverages of different identification sensors.
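
The duplicate-identification check can be sketched as below; it only flags the suspect tracks, and how the system then demotes them back to unassociated positions is left out of this illustration.

```python
def find_duplicate_associations(associations):
    """Duplicate-ID check, sketched: the same identification associated with two
    different tracks signals a false single association; the affected tracks are
    returned so that they can be demoted to unassociated positions."""
    first_owner, suspects = {}, set()
    for track, tag in associations.items():
        if tag is None:
            continue
        if tag in first_owner:
            suspects.update({track, first_owner[tag]})
        else:
            first_owner[tag] = track
    return suspects
```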

3.4. Coverage Adjustment Scheme

In the initial state, the approximated radius of an identification sensor is set as a physical variable in the system. Since this radius is used to determine whether objects enter or leave the coverage of an identification sensor, it needs to be estimated accurately for improved association performance. However, the association performance is also affected by simultaneous entrances and collisions, which occur frequently where objects are densely populated. The association performance is therefore not improved in proportion to the accuracy of the estimated radius, but the time needed to stabilize the association performance is inversely proportional to that accuracy. In order to adjust the initial radius of an identification sensor, we utilize the object dynamics of entering and leaving the coverage of the identification sensor.

The basic idea of the coverage adjustment scheme is to compare the number of estimated positions with the number of registered identifications within the coverage of an identification sensor. If the approximated radius of an identification sensor is sufficiently accurate, the number of estimated positions is mostly equal to the number of registered identifications; otherwise, the approximated coverage differs from the actual coverage. The radius of an identification sensor is adjusted by checking the difference between them. In some cases, the system needs to check the farthest or closest estimated position from the center of an identification sensor; for example, when the number of estimated positions equals the number of registered identifications, the coverage of an identification sensor should be adjusted toward the farthest estimated position. The remaining question in the coverage adjustment scheme is by how much the radius should be adjusted at each sampling time. Since the coverage of an identification sensor can vary over time, a large change of the radius may have the reverse effect and the association performance may degrade. Thus, we use the average speed of the tracked objects, measured by the association system, as the magnitude of the radius adjustment, so that the scheme is not overly sensitive to the object dynamics.

The temporal change of the sets of positions and identifications is utilized to adjust the initial coverage, under the assumption that the coverage of an identification sensor varies slowly. Since an association can be established at every sampling time, the coverage approximated by the visual sensor is also adjusted by changing the radius at each time step. The average speed of the tracked objects, measured by the association system, determines the step size of the change, since the registration is related to the object dynamics. The adjusted radius of an identification sensor is kept between a minimum and a maximum radius, and the sets of estimated positions and registered identifications are maintained with respect to the adjusted radius.

At each sampling time, the sets of newly added estimated positions and newly registered identifications are represented, respectively, as

(15)

When the numbers of newly added elements in the two sets are equal, the radius is kept, that is,

(16)

where the adjusted coverage is determined by the added objects and lies between the minimum and maximum radii. On the other hand, when the number of newly registered identifications is smaller than the number of newly estimated positions, the current radius is reduced by

(17)

If no identification is registered while new positions are estimated, as shown in Figure 6, the current radius of the approximated coverage can be much larger than the radius of the actual coverage. In this case, the estimated position with the minimum distance from the sensor position is used to determine the adjusted radius by

(18)

On the contrary, when the number of registered identifications is greater than the number of newly estimated positions, the current radius is enlarged by

(19)

In particular, if the number of newly registered identifications equals the number of estimated positions found within the enlarged coverage, as shown in Figure 7, the current radius of the approximated coverage can be much smaller than the radius of the actual coverage. The radius is then enlarged by

(20)

where an extra margin is added to the coverage to prevent false associations caused by the irregularity of the actual coverage.
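
The entering-side rules in (16) through (20) can be summarized in a heavily simplified sketch such as the one below. The names, the use of the average object speed as the per-step adjustment, and the extra margin are taken from the description above, but the exact update expressions are assumptions, not the paper's formulas.

```python
def adjust_radius_on_entering(radius, new_position_dists, n_new_ids,
                              avg_speed, r_min, r_max, margin=0.0):
    """Entering-side coverage adjustment, a simplified sketch of (16)-(20).
    new_position_dists -- distances of newly estimated positions from the sensor
    n_new_ids          -- number of newly registered identifications
    avg_speed          -- average speed of tracked objects per sampling interval,
                          used as the per-step adjustment magnitude
    margin             -- extra coverage added when enlarging, to guard against
                          the irregular actual coverage (value is an assumption)
    """
    n_new_positions = len(new_position_dists)
    if n_new_ids == n_new_positions:
        return radius                                   # counts agree: keep the radius
    if n_new_ids < n_new_positions:                     # modeled coverage looks too large
        if n_new_ids == 0:
            return max(min(new_position_dists), r_min)  # shrink toward the closest new position
        return max(radius - avg_speed, r_min)           # shrink by one step
    return min(radius + avg_speed + margin, r_max)      # more IDs than positions: enlarge
```

The leaving-side rules in (21) through (26) mirror this logic with the roles of shrinking and enlarging exchanged.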

Figure 6

Illustration of coverage reduction when objects enter the coverage of an identification sensor.

Figure 7

Illustration of coverage enlargement when objects enter the coverage of an identification sensor.

A similar radius adjustment is applied when objects leave the coverage of an identification sensor. The set of leaving positions and the set of leaving identifications at a sampling time are represented, respectively, by

(21)

When the number of leaving identifications equals the number of leaving positions, the current radius is kept, that is,

(22)

where the adjusted coverage is determined by the leaving objects and lies between the minimum and maximum radii. On the other hand, when the number of leaving identifications is greater than the number of leaving estimated positions, the radius is reduced as

(23)

If the numbers of leaving identifications and estimated positions are equal, as shown in Figure 8, the radius is reduced according to the estimated position having the maximum distance from the position of the identification sensor:

(24)

where an extra margin is added to the coverage to prevent false associations caused by the irregularity of the actual coverage. When the number of leaving identifications is smaller than the number of leaving estimated positions, the radius is enlarged by

(25)

If the number of leaving identifications is zero while positions leave, as shown in Figure 9, the current radius of the approximated coverage is much smaller than the radius of the actual coverage. In this case, the leaving estimated position with the maximum distance from the sensor position is used to determine the adjusted radius by

(26)

where an extra margin is added to the coverage to prevent false associations caused by the irregularity of the actual coverage.

Figure 8

Illustration of coverage reduction when objects leave the coverage of an identification sensor.

Figure 9

Illustration of coverage enlargement when objects leave the coverage of an identification sensor.

If the adjustments suggested by the entering and leaving cases conflict with each other, the coverage of the identification sensor needs to be adjusted passively to prevent false associations. Therefore, the final radius is determined by

(27)

Moreover, the goal of the coverage adjustment is to prevent a significant discrepancy between the initially approximated coverage and the actual coverage while conserving the current association information of the objects. Hence, the adjusted radius should not exclude the positions of objects that already have association information.

Figure 10 illustrates a simulation setup showing the identification sensors and object trajectories. The adjustable radius ranges between 1 m and 6 m. To test extreme cases, the initial radius of one identification sensor is set to 6 m and that of the other to 1 m, while the actual radius is 3 m for both sensors. The simulation assumes that the identifications of objects are perfectly registered within the actual radii of the identification sensors.

Figure 10

Simulation setup for coverage adjustment: identification sensors and object trajectories, and object locations in terms of tagging regions.

Figure 11 shows the corresponding coverage adjustment and association status for the setup in Figure 10. For the first identification sensor, the initial coverage is slowly adjusted as objects enter the coverage: whenever no identification is registered, the coverage is changed by (18), and when the number of entering or leaving positions differs from the number of entering or leaving registrations, the adjusted radius changes slowly. For the second identification sensor, the initial coverage changes abruptly in reaction to a registration through (20), and its coverage shows a similar variation caused by the mismatched numbers of positions and identifications. Eventually, the initial radii of both sensors converge to the actual coverage as the association rate increases.

Figure 11

Simulation result for the setup in Figure 10: coverage adjustment and association status.

3.5. Association Algorithm

Algorithm 1 summarizes the conditions for multiple-object association under coverage uncertainty. If an estimated position is in a group association, its possible associable identifications are limited to the identification set of that group. Objects in incomplete group associations also have identification candidates. Therefore, the possibility increases that an estimated position will be uniquely paired with its identification. After the association system finishes checking the association conditions for each object in the coverage, it determines whether the remaining objects form a group association or an incomplete group association. Then, the association system removes the associated identifications and estimated positions from all sets of group and incomplete group associations. Single associations can also be established in this process when the number of elements in a group association is two.

Algorithm 1: The proposed association algorithm.

repeat

  Estimate the positions of all detected objects by the visual sensors;

  Register identifications by the identification sensors;

  Generate the sets of unassociated positions and identifications for each identification sensor;

  for each unassociated estimated position do

    if the simple association condition in (3) is satisfied then

      associate the position with the registered identification;

    else if the position is in a group association and the entering condition is satisfied then

      associate the position and identification satisfying the condition;

      remove them from the group association;

    else if the condition in (12) is satisfied then

      associate the position and identification satisfying the condition;

      remove them from the incomplete group association;

    end

  end

  for each position in a group association do

    if the leaving condition in the group association is satisfied then

      associate the position and identification satisfying the condition;

      remove them from the group association;

    end

  end

  if the remaining objects satisfy the condition in (3) then

    register them as a single association;

  else if the remaining objects satisfy the condition in (5) then

    register them as a group association;

  else

    update the candidate identifications of objects in group associations;

  end

until the association system stops;
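
For orientation, a single sampling step of this loop could be composed from the hypothetical helpers sketched earlier in this section, as below. This driver omits group splitting and the coverage adjustment for brevity and is not the authors' implementation; the sensor dictionary keys are assumptions.

```python
def association_step(sensors, positions, associations, groups, candidates):
    """One sampling step of the association loop, composed from the sketches above."""
    for sensor in sensors:
        unassoc_pos = unassociated_positions_in_coverage(
            positions, associations, sensor["xy"], sensor["radius"])
        unassoc_ids = set(sensor["registered_ids"]) - set(associations.values())
        associations.update(try_simple_association(unassoc_pos, unassoc_ids))
        group = try_group_association(unassoc_pos, unassoc_ids)
        if group is not None:
            groups.setdefault(sensor["name"], []).append(group)
        else:
            update_incomplete_group(candidates, unassoc_pos, unassoc_ids)
    return associations, groups, candidates
```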

4. Evaluation

4.1. Simulation Setup

Figure 12 shows a simulation configuration that could correspond to a bank or an airport. Objects enter and exit through gates where identification sensors are installed, and the colored circular areas are the coverages of the identification sensors. Ten objects start from distinct initial positions, and six identification sensors are placed at fixed locations, as shown in the figure. The coverage radius of every identification sensor is approximated by the visual sensors as 3 m. Objects are localized and tracked by the visual sensors, and the total number of sampling times is 130. In the simulation, the registration of an identification is determined probabilistically to reflect the effect of the coverage uncertainty: the sampling interval of the identification sensors is 1 s, the trajectories of the objects are plotted at the same 1 s interval in the figure, and at every sampling interval of an identification sensor each object is registered with probability 0.5.

Figure 12

Simulation configuration with the trajectories of ten objects (unit: meter).

The association performance for identification is compared against the simple association rule, in which a position and an identification of a single object are associated when each signal exists alone in the sensor coverage [15, 16]. It is assumed that objects are localized and tracked by multiple cameras without failure, and a simple object tracking algorithm is used since object models are not known to the association system.

4.2. Effect of Modeled Region Accuracy

When an object is associated with its identification through the object dynamics, (9) or (10) should be satisfied. The necessary condition is that an identification is registered immediately after an object enters, or right before an object leaves, the region. However, satisfying these conditions depends on how accurately the actual coverage is approximated, as shown in Figure 13, and localization errors of the visual sensors cause additional ambiguity at the boundary of the coverage. In order to analyze the effect of the modeled region accuracy, we utilize a random offset parameter defined by the mean distance between the modeled boundary and the actual boundary together with its variance. The actual size of the coverage is determined by adding this offset to the modeled size of the coverage, and an identification is considered registered in the system only when the position of an object is estimated within the actual size of the coverage.
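
In a simulation, the mismatched coverage could be generated as in the sketch below; the Gaussian form of the random offset is an assumption made here for illustration, since the paper only specifies a mean distance and a variance.

```python
import random

def sample_actual_radius(modeled_radius, mean_offset, std_dev):
    """Modeled-region accuracy, sketched: the actual boundary is the modeled
    boundary plus a random offset (assumed Gaussian) with the given mean and
    spread; an identification counts as registered only if the object's
    position falls inside this sampled actual coverage."""
    return modeled_radius + random.gauss(mean_offset, std_dev)
```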

Figure 13

Illustration of the effect of modeled region accuracy in association condition.

Figure 14 shows the simulation result of the association performance according to the variance of the actual size of the coverage. The association simulation is repeated 100 times and the results are averaged in order to reflect the effect of the mismatched coverage model. The result indicates that the establishment of group associations is affected by the discrepancy between the actual and modeled coverage; however, the association performance is not significantly affected by the coverage variance, since group associations are eventually resolved into single associations by the object dynamics.

Figure 14

Simulation of the association performance according to the variation of the modeled region: variation of the single and total association rates, and variation of the group association rate.

The coverage adjustment scheme also alleviates the effect of the discrepancy between the actual coverage and the approximated coverage. Figure 15 shows the simulation result of the coverage adjustment scheme with the current simulation configuration. The minimum and maximum radii of each identification sensor are set to 1 m and 6 m, respectively, while the actual radius of each identification sensor varies from the initial radius of 3 m every 20 sampling times; the amount of each variation is chosen from a uniform distribution between −1 m and 1 m. The result demonstrates that the approximated coverage of each identification sensor is adaptively adjusted to the actual coverage. Moreover, as the association rate increases, the accuracy of the approximated coverage also increases, since the object dynamics are utilized for the coverage adjustment.

Figure 15

Simulation result of coverage adjustment and association status.

4.3. Effect of Region Overlapping

Figure 16 shows a case in which identification sensor regions overlap each other because of an overly large approximated coverage. The overlapped regions may appear to confuse the system, but they do not degrade the association performance, since each region has its own data sets of estimated positions and identifications. Instead, an overlapped region can be treated as a separate region, so the system effectively gains one more region. For example, the overlapped region of two identification sensors has its own sets of unassociated estimated positions and identifications, which are generated from the initially obtained data as

(28)

This can increase the number of cases in which a single or group association can be made. However, it cannot significantly improve the association performance, because it is hard to define an optimal overlap; in particular, when the actual regions do not overlap, the sets for the overlapped regions become useless.
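
One plausible reading of (28) is sketched below: the overlap is handled as an extra region whose sets are those shared by the two overlapping sensor regions. The intersection-based construction is an assumption for illustration, not the paper's exact definition.

```python
def overlapped_region_sets(region_a, region_b):
    """Sets for an overlapped region, sketched: the overlap is treated as its
    own region whose unassociated positions and identifications are those
    shared by the two overlapping sensor regions."""
    return {
        "positions": region_a["positions"] & region_b["positions"],
        "ids": region_a["ids"] & region_b["ids"],
    }
```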

Figure 16

Illustration of the effect of region overlapping.

Figure 17 shows a simulation configuration with overlapping identification sensor regions, and Figure 18 shows the corresponding simulation results. The association simulation is repeated 100 times and the results are averaged in order to reflect the effect of the coverage uncertainty. Since the system has more regions, single associations can be established faster. However, this does not mean that the total association rate increases: overlapped regions split two sets of data into three, which can decrease the establishment of group associations depending on the object movement patterns and the registration performance of the identification sensors. Therefore, this scheme has both advantages and disadvantages in terms of the association performance.

Figure 17

Illustration of the simulation configuration with overlapping identification sensor regions.

Figure 18

Simulation result of the association performance according to the variance of the modeled region for the configuration in Figure 17: variation of the single and total association rates, and variation of the group association rate.

4.4. Association Performance

Figure 19 shows how the object association status changes under coverage uncertainty. Figure 19(a) shows the registered identification of each object as a function of time for each identification sensor; each color corresponds to the coverage of one identification sensor, and white indicates that an object is not within the coverage of any identification sensor. Although the identifications of objects are registered probabilistically due to the coverage uncertainty, the positions of the objects are eventually associated with their own identifications, as shown in Figure 19(b). Figure 19(c) shows which IDs are registered for each object as a function of time; each color corresponds to the ID of one object, and every object is eventually associated with its own ID.

Figure 19

Object association status with inconsistent registration of identifications when the proposed association method is used: identification registration, object association status, and associated IDs for each object.

Figure 20 compares the association performance of the existing association method [15, 16] and the proposed association method in terms of the tracking performance. Tracking failure is defined as the case in which object tracking fails due to a collision: in one setting tracking fails when objects come within 0.3 m of each other, and in the other the threshold is 0.6 m. When an object fails in tracking due to a collision, it loses all associated identifications, regardless of whether they come from a single, group, or incomplete group association. The association simulation is repeated 100 times and the results are averaged in order to reflect the effect of the coverage uncertainty. The simulation results show that the associations are well established when the system does not suffer from the coverage uncertainty, and the proposed association method establishes single associations faster than the existing method regardless of the coverage uncertainty. In particular, group and incomplete group associations increase the average association rate. The results also demonstrate that the tracking performance has less influence on the proposed association method in terms of the average association performance: although object tracking fails more often, the identifications are maintained by group or incomplete group associations. Finally, the proposed method is less vulnerable to a smaller number of identification regions in terms of the association performance.

Figure 20

Comparison of association performance in terms of the number of identification regions and the tracking performance: 4 identification regions with tracking failure within 0.3 m, 4 identification regions with tracking failure within 0.6 m, 6 identification regions with tracking failure within 0.3 m, and 6 identification regions with tracking failure within 0.6 m.

4.5. Robustness against False Detection and False Tracking

The proposed method is robust against two nonideal phenomena that can be caused by visual sensors. The first is falsely detected objects, which depend on the classification capability of the detection algorithms. When objects are falsely detected inside a region, this leads to a group or incomplete group association, which is eventually resolved when the true position of an object is associated with its identification. The second is false tracking, which usually occurs when objects collide with each other, so that identifications can be switched depending on the tracking capability. In this case, the proposed method falls back on a group association, and the identifications are eventually recovered through the object dynamics, even though the system cannot immediately determine whether an object carries identification information because of the coverage uncertainty.

5. Conclusions

A data association and management scheme is proposed to combine two different types of signals in a heterogeneous sensor environment. Visual sensors estimate and track the positions of objects, and identification sensors register the identifications of objects. The uncertain sensing coverage of an identification sensor is approximately modeled for a simple association strategy, and the location information of identification sensors and objects is utilized together with the object dynamics to resolve the association problems. We also present a coverage adjustment method using the object dynamics around the coverage of the identification sensor. The simulation-based analysis shows that the association performance improves as time elapses, even under realistic problems such as errors in the estimated positions, a discrepancy between the approximated and actual identification sensor coverage, variance of the actual identification sensor coverage, and imperfect tracking. To improve the association performance, the identification sensors should be installed where objects move around dynamically, so that associations are established or recovered quickly, since the associations are established by the object dynamics of crossing the coverage of identification sensors.

References

  1. Strobel N, Spors S, Rabenstein R: Joint audio-video object localization and tracking: a presentation of general methodology. IEEE Signal Processing Magazine 2001, 18(1):22-31. doi:10.1109/79.911196
  2. Zhou H, Taj M, Cavallaro A: Target detection and tracking with heterogeneous sensors. IEEE Journal of Selected Topics in Signal Processing 2008, 2(4):503-513.
  3. Hu W, Tan T, Wang L, Maybank S: A survey on visual surveillance of object motion and behaviors. IEEE Transactions on Systems, Man, and Cybernetics, Part C 2004, 34(3):334-352. doi:10.1109/TSMCC.2004.829274
  4. Zhao W, Chellappa R, Phillips PJ, Rosenfeld A: Face recognition: a literature survey. ACM Computing Surveys 2003, 35(4):399-458. doi:10.1145/954339.954342
  5. Yang MH, Kriegman DJ, Ahuja N: Detecting faces in images: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 2002, 24(1):34-58. doi:10.1109/34.982883
  6. Brunelli R, Poggio T: Face recognition: features versus templates. IEEE Transactions on Pattern Analysis and Machine Intelligence 1993, 15(10):1042-1052. doi:10.1109/34.254061
  7. Grudin MA: On internal representations in face recognition systems. Pattern Recognition 2000, 33(7):1161-1177. doi:10.1016/S0031-3203(99)00104-1
  8. Garcia C, Tziritas G: Face detection using quantized skin color regions merging and wavelet packet analysis. IEEE Transactions on Multimedia 1999, 1(3):264-277. doi:10.1109/6046.784465
  9. Hsu RL, Abdel-Mottaleb M, Jain AK: Face detection in color images. IEEE Transactions on Pattern Analysis and Machine Intelligence 2002, 24(5):696-706. doi:10.1109/34.1000242
  10. Römer K, Schoch T, Mattern F, Dübendorfer T: Smart identification frameworks for ubiquitous computing applications. Wireless Networks 2004, 10(6):689-700.
  11. Roberts CM: Radio frequency identification (RFID). Computers and Security 2006, 25(1):18-26. doi:10.1016/j.cose.2005.12.003
  12. Roussos G, Kostakos V: RFID in pervasive computing: state-of-the-art and outlook. Pervasive and Mobile Computing 2009, 5(1):110-131. doi:10.1016/j.pmcj.2008.11.004
  13. Schulz D, Fox D, Hightower J: People tracking with anonymous and ID-sensors using Rao-Blackwellised particle filters. Proceedings of the International Joint Conference on Artificial Intelligence, August 2003.
  14. Shin J, Kumar R, Mohapatra D, Ramachandran U, Ammar M: ASAP: a camera sensor network for situation awareness. Proceedings of the 11th International Conference on Principles of Distributed Systems (OPODIS '07), December 2007, 31-47.
  15. Cho SH, Lee J, Deng X, Hong S, Cho W-D: Passive sensor based dynamic object association method in wireless sensor networks. Proceedings of the 50th Midwest Symposium on Circuits and Systems (MWSCAS '07), August 2007, 1221-1224.
  16. Cho SH, Lee J, Hong S: Passive sensor based dynamic object association with particle filtering. Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS '07), September 2007, 206-211.
  17. Kyong Y, Cho SH, Hong S: Local initiation method for multiple object association in surveillance environment with multiple cameras. Proceedings of the IEEE 5th International Conference on Advanced Video and Signal Based Surveillance (AVSS '08), September 2008, 348-355.
  18. Park K-S, Lee J, Stanaćević M, Hong S, Cho W-D: Iterative object localization algorithm using visual images with a reference coordinate. EURASIP Journal on Image and Video Processing 2008, 2008.
  19. Yoon SM, Kim H: Real-time multiple people detection using skin color, motion and appearance information. Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication, September 2004, 331-334.
  20. Eng HL, Wang J, Kam AH, Yau WY: A Bayesian framework for robust human detection and occlusion handling using human shape model. Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), August 2004, 257-260.
  21. Elzein H, Lakshmanan S, Watta P: A motion and shape-based pedestrian detection algorithm. Proceedings of the IEEE Intelligent Vehicles Symposium, June 2003, 500-504.


Acknowledgments

This paper was supported in part by the Mid-career Researcher Program of Korea Science and Engineering Foundation (KOSEF) Grant funded by the Korea government (MEST) (no.2010-0000487) and the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MEST) (no. 2010-0027499). Part of this paper was presented at 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2009) Genoa, Italy, September 2009.

Author information

Corresponding author

Correspondence to Seong-Jun Oh.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Cho, S.H., Hong, S., Moon, N. et al. Object Association and Identification in Heterogeneous Sensors Environment. EURASIP J. Adv. Signal Process. 2010, 591582 (2010). https://doi.org/10.1155/2010/591582
