Traffic Data Collection for Floating Car Data Enhancement in V2I Networks
© D. F. Llorca et al. 2010
Received: 19 November 2009
Accepted: 5 July 2010
Published: 25 July 2010
This paper presents a complete vision-based vehicle detection system for floating car data (FCD) enhancement in the context of vehicular ad hoc networks (VANETs). Three cameras (side-, forward-, and rear-looking cameras) are installed onboard a vehicle in a fleet of public buses. Thus, a more representative local description of the traffic conditions (extended FCD) can be obtained. Specifically, the vision modules detect the number of vehicles contained in the local area of the host vehicle (traffic load) and their relative velocities. Absolute velocities (average road speed) and global positioning are obtained after combining the outputs provided by the vision modules with the data supplied by the CAN Bus and the GPS sensor. This information is transmitted by means of a GPRS/UMTS data connection to a central unit which merges the extended FCD in order to maintain an updated map of the traffic conditions (traffic load and average road speed). The presented experiments are promising in terms of detection performance and computational costs. However, significant further effort is necessary before deploying the system in large-scale real applications.
Floating car data (FCD) refers to technology that collects traffic state information from a set of individual vehicles which float in the current traffic. Each vehicle, which can be seen as a moving sensor that operates in a distributed network, is equipped with positioning (GPS) and communication (GSM, GPRS, UMTS, etc.) systems, transmitting its location, speed, and direction to a central control unit that integrates the information provided by each one of the vehicles.
FCD systems are increasingly used in a variety of important applications, since they overcome the limitations of fixed traffic monitoring technologies (installation and maintenance costs, lack of flexibility, static nature of the information, etc.). We refer the reader to the literature for general background on the most representative FCD activities in Japan, Europe, and the United States.
FCD can be used by the public sector to collect road traffic statistics and to carry out real-time road traffic control. The information provided by FCD systems can be supplied to individual drivers via dynamic message signs, PDA devices, satellite navigation systems, or mobile phones, including dynamic rerouting information. Thus, drivers would be able to make more informed choices, spending less time in congested traffic. In addition, knowledge of the current traffic situation can also be used to estimate the time of arrival of a fleet of public transport vehicles and, furthermore, to plan and coordinate the movements of the fleet (fleet management) so that driving assignments can be carried out more efficiently. Beyond these applications, the use of FCD entails environmental benefits, since it can be used to reduce fuel consumption and emissions.
The basic data provided by FCD systems (vehicle location, speed, and direction) can be enriched using new onboard sensors (ambient temperature, humidity and light, windshield wiper status, fog light status, fuel consumption, emissions, tire pressure, suspension, emergency brake, etc.) which are centralized by means of the controller area network (CAN) bus. Such data can be exploited to extend the information horizon, including traffic, weather, road management, and safety applications. In addition, computer vision systems can be included in order to improve the automatic detection of potentially interesting events and to document them by sending extended data.
In order to provide ubiquitous coverage of the entire road network, a minimum representative share of the total passenger car fleet has to be used, since each moving sensor (each vehicle) only supplies information about its own status. The fact that everyday road users have to be asked to share information about their movements and speeds raises privacy issues that have to be addressed. Many potential road travellers may be reluctant to join FCD projects because of violations of their privacy due to permanent traceability, or because of possible liability in case of speed limit violations. Thus, the fundamental concept for FCD systems calls for no identification information to be sent with the basic data, which can be easily implemented from a technical perspective. For example, a general method for anonymizing FCD by deriving pseudonyms for trips has been presented in the literature.
Another approach consists of using the information supplied by a specific fleet of vehicles, rather than information coming from individual road users. Taxis or public transport buses can be used due to the extended periods of time they spend on the urban road network. Although taxis and buses provide a major source of inner-city traffic information because of the time they spend mobile, they have limitations. Problems arise if taxi drivers, through detailed knowledge of the local road network, take steps to avoid congested areas, which will then not be reported. Traffic load perception may also be lower than the actual one if reserved taxi or bus lanes are used. On the other hand, privacy issues are not as critical as with individual road users, especially when using a fleet of public transport buses.
This paper presents a complete vision-based vehicle detection system onboard a fleet of public transport buses with the aim of improving the data collected in FCD applications. The proposed system has been developed in the framework of the GUIADE project. Three cameras covering the local environment of the vehicle are used: forward-, rear-, and side-looking cameras. The system obtains, under certain constraints such as good weather and daytime conditions, the number of vehicles in the local range of the bus as well as their relative position and velocity. This information is combined with the data provided by regular FCD systems (global location, speed, and direction), obtaining a more detailed description of the local traffic load and the average speed. The communication system between the vehicles and the central control unit is based on wireless technology via GPRS/UMTS cellular protocols. Finally, the central unit integrates the data collected by the fleet in order to generate updated traffic status maps.
The remainder of this paper is organized as follows: the description of the system including the wireless communication scheme is summarized in Section 2. Section 3 describes the vision-based vehicle detection system as well as the spatial and temporal integration of the collected data. Experimental results that validate the proposed approach are presented in Section 4. Finally, conclusions and future works are discussed in Section 5.
2. System Description
The vehicle-to-infrastructure (V2I) communication system is based on the geographic coverage provided by cellular networks. General packet radio service (GPRS) and universal mobile telecommunications system (UMTS) are used to connect each vehicle with the central control unit. Each vehicle provides information that can be divided into three main groups.
Standard FCD information: vehicle identifier (2 bytes), timestamp (11 bytes), GPS position (8 bytes), speed (2 bytes), and direction (2 bytes).
Vehicle status information: ambient temperature (2 bytes), humidity (2 bytes), light (2 bytes), windshield wiper status (1 byte), fog light status (1 byte), fuel consumption (4 bytes), and emissions (4 bytes).
Extended FCD information: globally referenced average traffic load (2 bytes) and average road speed for a measured segment travel time (2 bytes).
As can be observed, the total message size per vehicle is 45 bytes. The extended FCD information is supplied to the central unit at a frequency of 1 Hz. Accordingly, the bandwidth currently demanded by vehicular communication in the communication channel, that is, the vehicle throughput, is 360 bps without overheads. This value can be considered negligible taking into account the available bandwidth and the proposed FCD architecture.
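As a sanity check of the sizes and throughput figures above, the message layout can be sketched as follows; the field names are illustrative, and only the byte counts come from the text:

```python
# Byte budget of the extended FCD message (Section 2); field names are
# illustrative labels, not the paper's wire format.
STANDARD_FCD = {"vehicle_id": 2, "timestamp": 11, "gps_position": 8,
                "speed": 2, "direction": 2}
VEHICLE_STATUS = {"ambient_temperature": 2, "humidity": 2, "light": 2,
                  "wiper_status": 1, "fog_light_status": 1,
                  "fuel_consumption": 4, "emissions": 4}
EXTENDED_FCD = {"avg_traffic_load": 2, "avg_road_speed": 2}

def message_size_bytes():
    # total payload per vehicle per report
    return (sum(STANDARD_FCD.values()) + sum(VEHICLE_STATUS.values())
            + sum(EXTENDED_FCD.values()))

def throughput_bps(frequency_hz=1):
    # payload bits per second per vehicle, excluding protocol overheads
    return message_size_bytes() * 8 * frequency_hz

print(message_size_bytes())  # 45
print(throughput_bps())      # 360
```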
The central control unit integrates the information provided by each one of the vehicles in order to compute updated traffic and weather maps which will be used for fleet management tasks as well as to estimate the time of arrival.
The vehicle-to-vehicle (V2V) communication system is defined as a backup communication system based on a wireless-fidelity (WiFi) IEEE 802.11a/b/g interface. In situations where the cellular network is not working, in-range vehicles will exchange the most updated information available.
One of the main advantages of the proposed approach is that it does not need to deal with privacy issues since the floating vehicles correspond to a fleet of public transport buses.
3. Vision-Based Traffic Detection System
In this section, we present the main contribution of this work: a complete vision-based traffic detection system which enhances the data supplied by standard FCD systems. The benefits of using computer vision instead of other technologies such as radar-based systems can be summarized as follows. Computer vision systems can compensate for the lower angular resolution of low-cost radar and the increased appearance of ghost radar targets (guard-rails, railings, lamp posts, reflections, etc.). These false positives are relevant and cannot simply be ignored. The camera has very good angular resolution and can be used to determine the height, width, and lateral speed of the target. Pattern recognition can be used to classify the object, and even weakly reflective targets such as pedestrians can be detected. Moreover, the cost of a vision system is significantly lower than the cost saved by using a simpler radar. A vision system, in addition to reducing cost, can contribute features such as road analysis and scene understanding.
3.1. Lane Detection
where c_h0 and c_h1 represent the clothoid horizontal curvature parameters, c_v0 and c_v1 stand for the clothoid vertical curvature parameters, while x_off, θ, and W are the lateral error and orientation error with regard to the centre of the lane and the width of the lane, respectively. The clothoid curves are then estimated from lane marking measurements using a Kalman filter for each lane.
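The lateral profile above follows the usual third-order clothoid approximation of the road model. The sketch below evaluates it; the symbol names (c0, c1, x_off, theta) are our own labels under that standard model, not necessarily the paper's notation:

```python
def clothoid_lateral_position(z, c0, c1, x_off, theta):
    """Third-order clothoid approximation of the lane border at look-ahead
    distance z:  x(z) = x_off + theta*z + c0*z**2/2 + c1*z**3/6.
    c0, c1: curvature and curvature rate; x_off, theta: lateral and
    orientation errors w.r.t. the lane centre (illustrative names)."""
    return x_off + theta * z + c0 * z**2 / 2.0 + c1 * z**3 / 6.0
```

A Kalman filter would then track (c0, c1, x_off, theta) over time from lane marking measurements, as the section describes.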
3.2. Side Vehicle Detection
The side vehicle detection module relies on the computation of optical flow. In order to reduce computational time, optical flow is computed only on Canny points in the image. Canny edge pixels are then matched and grouped together in order to detect clusters of pixels that can be considered candidate vehicles in the image. Classical clustering techniques are used to determine groups of pixels, as well as their likelihood of forming a single object. Even after pixel clustering, some clusters can still clearly be regarded as belonging to the same real object. A second grouping stage (double-stage) is then carried out among the different clusters in order to determine which of them can be further merged into a single blob. For this purpose, simple distance criteria are considered: two objects that are very close to each other are finally grouped together in the same cluster. The reason for computing a two-stage clustering process is that, by selecting a small distance parameter in the first stage, fine-grained information about the clusters in the scene can be obtained. Otherwise, using a large distance parameter in a single clustering pass, excessively coarse clusters would be obtained, losing all information about the granular structure of the points that provide optical flow in the image.
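The two-stage grouping described above can be sketched as follows, assuming simple single-link clustering with Euclidean distance thresholds; the thresholds, function names, and centroid-based merging are illustrative choices, not the paper's exact criteria:

```python
import math

def cluster_points(points, max_dist):
    """Single-link grouping: points closer than max_dist end up in the
    same cluster (union-find over all pairs)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= max_dist:
                parent[find(i)] = find(j)
    clusters = {}
    for i, p in enumerate(points):
        clusters.setdefault(find(i), []).append(p)
    return list(clusters.values())

def centroid(cluster):
    xs, ys = zip(*cluster)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def two_stage_clustering(points, fine_dist, coarse_dist):
    # Stage 1: small threshold preserves the granular structure of the
    # optical-flow points.
    fine = cluster_points(points, fine_dist)
    # Stage 2: merge fine clusters whose centroids are close into blobs.
    merged = cluster_points([centroid(c) for c in fine], coarse_dist)
    return fine, merged
```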
The selected clusters constitute the starting point for locating candidate vehicles in the image. For that purpose, the detected cluster positions are used as seed points to search for a collection of horizontal edges that could potentially represent the lower part of a car. The candidate is located on the detected horizontal edges that meet certain conditions of entropy and vertical symmetry. Some of the most critical aspects in side vehicle detection are: (i) shadows on the asphalt due to lampposts, other artefacts, or a large vehicle overtaking the ego-vehicle on the right lane; (ii) the self-shadow reflected on the asphalt (especially problematic in sharp turns such as roundabouts) or on road protection fences; (iii) robust performance in tunnels; and (iv) avoiding false alarms due to vehicles on the third lane.
3.3. Forward and Rear Vehicle Detection
In a second step, vertical edge, horizontal edge, and grey level symmetries are obtained, so that candidates only pass to the next stage if their symmetry values are greater than a threshold. The vertical and horizontal edge symmetries are computed as listed in Algorithm 1. The grey level symmetry computation procedure is shown in Algorithm 2. Some examples of the three types of symmetries are depicted in Figure 10.
Algorithm 1: Vertical and horizontal edges symmetries computation procedure.
Algorithm 2: Gray level symmetry computation procedure.
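Algorithms 1 and 2 give the exact procedures. As a rough illustration only, a minimal grey-level symmetry score over an 8-bit candidate patch might be computed as follows; the function name and the normalization are our assumptions, not the paper's procedure:

```python
def grey_level_symmetry(patch):
    """Score in [0, 1] for mirror symmetry about the patch's vertical axis;
    1.0 means perfectly symmetric. patch: rows of 8-bit grey values."""
    rows, cols = len(patch), len(patch[0])
    diff = total = 0
    for r in range(rows):
        for c in range(cols // 2):
            left, right = patch[r][c], patch[r][cols - 1 - c]
            diff += abs(left - right)
            total += 255  # maximum possible per-pair difference
    return 1.0 - diff / total if total else 1.0
```

A candidate would pass to the next stage only if this score (and the analogous edge-symmetry scores) exceeds a threshold, as the text describes.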
In the last equation, H represents the measurement matrix and v is the noise associated with the measurement process. The purpose of the Kalman filtering is to obtain a more stable position of the detected vehicles. Besides, oscillations in the vehicle position due to the unevenness of the road make the vertical coordinate of the detected vehicles change several pixels up or down. This effect makes the distance estimation unstable, so a Kalman filter is necessary to minimize these oscillations.
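As an illustration of this smoothing step, a minimal constant-velocity Kalman filter over a single image coordinate could look like the following sketch; the state layout, noise values, and class name are assumptions rather than the paper's implementation:

```python
class KalmanFilter1D:
    """Constant-velocity Kalman filter for one image coordinate.
    State = [position, velocity]; measurement = position only (H = [1, 0])."""
    def __init__(self, dt=0.04, q=1.0, r=4.0):
        self.x = [0.0, 0.0]                      # state estimate
        self.P = [[1000.0, 0.0], [0.0, 1000.0]]  # large initial uncertainty
        self.dt, self.q, self.r = dt, q, r       # step, process/meas. noise

    def update(self, z):
        dt, q, r = self.dt, self.q, self.r
        # Predict: x' = F x with F = [[1, dt], [0, 1]]
        x0 = self.x[0] + dt * self.x[1]
        x1 = self.x[1]
        P = self.P
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + q
        # Update: Kalman gain K = P H^T / (H P H^T + r), H = [1, 0]
        s = p00 + r
        k0, k1 = p00 / s, p10 / s
        y = z - x0                               # innovation
        self.x = [x0 + k0 * y, x1 + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]                         # smoothed position
```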
3.4. FCD Integration
As depicted in Figure 3, the FCD integration or Data Fusion module uses three sources of data: the measurements provided by the GPS, the data supplied by the CAN bus, and the output obtained from the three vision-based vehicle detection modules. Whereas the GPS and the CAN bus sample frequency is 1 Hz, the vision-based system operates in real-time at 25 frames per second (25 Hz). The proposed data fusion scheme provides information at the lowest sample frequency (1 Hz) covering two consecutive GPS measurements, the vehicle speed (via CAN bus) and the outputs of the vision module.
where d_i(t) and d_i(t-1) represent the distance between the host vehicle and vehicle i at frames t and t-1, respectively, Δt corresponds to the sample time, v_h is the host vehicle speed provided by the CAN bus, and N is the number of detected vehicles. Note that the distance values correspond to filtered measurements, since they are obtained from the first two elements of the Kalman filter state vector using known camera geometry and ground-plane constraints.
Two consecutive GPS measurements define both a spatial and a temporal segment. The temporal segment corresponds to the GPS sample time (1 second), and the spatial segment is defined as the globally referenced trajectory between the two GPS measurements. In order to obtain the extended FCD information (i.e., the road traffic load and the road speed) for this spatio-temporal segment, we integrate the values supplied by the vision modules during 25 consecutive frames. With this approach, a dense coverage of the road traffic load and the road speed can be assured for host vehicle speeds up to 180 km/h, since the total range of the vision module covers more than 50 m (25 meters each for the rear- and forward-looking modules; the side range covers up to two thirds of the bus length in the adjacent lane). Obviously, this maximum speed will never be exceeded by a public bus. This approach facilitates further map-matching tasks, since the extended FCD information between two consecutive points will always be globally referenced.
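The per-segment integration can be sketched as follows, assuming per-frame distance measurements keyed by a track identifier; the data layout, function name, and the convention of averaging all frame-to-frame speed samples are our assumptions:

```python
def extended_fcd_segment(distances_per_frame, host_speed, dt=0.04):
    """Integrate 25 Hz vision output over a 1-second GPS segment.
    distances_per_frame: list of dicts {vehicle_id: distance_m}, one per frame.
    host_speed: host vehicle speed from the CAN bus (m/s).
    Returns (traffic_load, average_road_speed_mps):
      traffic_load     -- number of distinct vehicles seen in the segment
      average speed    -- mean of host_speed + (d_i(t) - d_i(t-1)) / dt
    """
    seen_ids = set()
    abs_speeds = []
    for prev, cur in zip(distances_per_frame, distances_per_frame[1:]):
        seen_ids.update(prev)
        for vid, d in cur.items():
            seen_ids.add(vid)
            if vid in prev:
                relative_speed = (d - prev[vid]) / dt
                abs_speeds.append(host_speed + relative_speed)
    avg = sum(abs_speeds) / len(abs_speeds) if abs_speeds else host_speed
    return len(seen_ids), avg
```

For example, a vehicle whose distance grows by 0.4 m between consecutive frames (dt = 0.04 s) has a relative speed of +10 m/s, hence an absolute speed of host_speed + 10.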
4. Experimental Results
The system was implemented on a PC Core 2 Duo at 3.0 GHz and tested in real traffic conditions using CMOS cameras with low-resolution images (320 x 240). After training and testing, a tradeoff point has been chosen at a detection rate (DR) of 95% and a false positive rate (FPR) of 5% for the rear-SVM classifier, and at a DR of 90% and an FPR of 6% for the forward-SVM classifier. We note that these figures are obtained in an offline, single-frame fashion, so they will be improved in subsequent stages. In addition, the lane detection system reduces the search area and the number of false candidates passed to further stages.
5. Concluding Remarks
This paper presented a complete vision-based vehicle detection system that enhances the data supplied by FCD systems in the context of vehicular ad hoc networks. The system is composed of three vision subsystems (side, forward, and rear subsystems) that detect the traffic load and the relative velocities of the vehicles contained in the local area of the host vehicle. Under certain constraints, such as good weather and daytime conditions, absolute velocities and global positioning are obtained after combining the outputs provided by the vision modules with the outputs supplied by the CAN Bus and the GPS sensor. Standard FCD systems provide the vehicle position, speed, and direction. The proposed approach extends this information by including more representative measurements corresponding to the traffic load and the average road speed.
In order to cover the entire road network, the proposed vision-based system is defined for being installed onboard a fleet of public buses where privacy is a minor issue. The extended packets collected by each moving vehicle are transmitted to the central unit by means of a GPRS/UMTS data connection. The central unit merges the extended FCD in order to maintain an updated map of the traffic conditions (traffic load and average road speed).
The presented experiments are promising in terms of detection performance and computational costs. However, significant further effort is necessary before deploying the system in large-scale real applications. For this purpose, new experiments will be carried out merging the data collected by more than one vehicle, including map-matching techniques and further analysis of V2I and V2V communications (e.g., using repetition-based MAC protocols). In addition, the proposed vision-based vehicle detection system will be extended to deal with complex weather conditions (e.g., wet or snowy roads) as well as night-time conditions.
This work has been supported by the Spanish Ministry of Science and Innovation by means of Research Grant TRANSITO TRA2008-06602-C03 and Spanish Ministry of Development by means of Research Grant GUIADE P9/08.
- Bishop R: Intelligent Vehicle Technologies and Trends. Artech House, Boston, Mass, USA; 2005.
- Messelodi S, Modena CM, Zanin M, De Natale FGB, Granelli F, Betterle E, Guarise A: Intelligent extended floating car data collection. Expert Systems with Applications 2009, 36(3, part 1):4213-4227. doi:10.1016/j.eswa.2008.04.008
- Rass S, Fuchs S, Schaffer M, Kyamakya K: How to protect privacy in floating car data systems. Proceedings of the 5th ACM International Workshop on Vehicular Inter-Networking (VANET '08), September 2008, 17-22.
- Day P, Wu J, Poulton N: Briefing Note on Floating Car Data. ITS International; 2006.
- Sotelo MA, Nuevo J, Bergasa LM, Ocaña M, Parra I, Fernández D: Road vehicle recognition in monocular images. Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE '05), June 2005, 1471-1476.
- Sotelo MA, Barriga J: Blind spot detection using vision for automotive applications. Journal of Zhejiang University: Science A 2008, 9(10):1369-1372. doi:10.1631/jzus.A0820111
- Burges CJC: A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery 1998, 2(2):121-167. doi:10.1023/A:1009715923555
- Dalal N, Triggs B: Histograms of oriented gradients for human detection. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), June 2005, 886-893.
- Derong Y, Yuanyuan Z, Dongguo L: Fast computation of multiscale morphological operations for local contrast enhancement. Proceedings of the 27th Annual International Conference of the Engineering in Medicine and Biology Society (IEEE-EMBS '05), September 2005, 3090-3092.
- Dougherty ER: An Introduction to Morphological Image Processing. SPIE Optical Engineering Press; 1992.
- Balcones D, Llorca DF, Sotelo MA, et al.: Real-time vision-based vehicle detection for rear-end collision mitigation systems. Proceedings of the 12th International Conference on Computer Aided Systems Theory (EUROCAST '09), 2009, Lecture Notes in Computer Science 5717:320-325.
- Kalman RE: A new approach to linear filtering and prediction problems. Journal of Basic Engineering, Series D 1960, 82:35-45. doi:10.1115/1.3662552
- Hassanabadi B, Zhang L, Valaee S: Index coded repetition-based MAC in vehicular ad-hoc networks. Proceedings of the 6th IEEE Consumer Communications and Networking Conference (CCNC '09), 2009.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.