V-DaT: A Robust Method for Vehicle Detection and Tracking

Abstract: Vision-based traffic surveillance has been one of the most promising fields for research and improvement. Still, many challenging problems remain unsolved, such as handling vehicle occlusions and reducing false detections. In this work, a method for vehicle detection and tracking is proposed. The proposed model builds on the background subtraction concept for moving vehicle detection but, unlike conventional approaches, applies several algorithmic optimizations: multi-directional filtering and fusion based background subtraction, thresholding, directional filtering and morphological operations. In addition, blob analysis and adaptive bounding boxes are used for detection and tracking. The performance of the proposed method is measured on standard datasets and the results are encouraging.


Introduction
In general, traffic monitoring and control mechanisms are employed by different socioeconomic and administrative entities, including private and public companies and government agencies, to enable efficient and safe traffic navigation and control.
A static camera setup supervising a specific object or scene is usually referred to as a surveillance system. Recognizing intruders or targeted objects is often a vital phase of image or video (scene) analysis and object segmentation. Identifying an object in a scene leads to its separation or localization from the background, which eventually enables classification. The predominant purpose of moving object detection and segmentation is to retrieve significant information about the moving vehicle from video sequences, which in turn enables tracking and further classification and decision processes.
Vehicle detection is vital in major video-based applications such as video surveillance, vehicle tracking (including tracking under occlusion), and pattern recognition and classification. Traditional mechanisms for moving object segmentation comprise inter-frame differencing, background subtraction and optical flow techniques [1]. However, the accuracy of vehicle detection and tracking primarily depends on vehicle region segmentation, and therefore an optimal vehicle detection approach is needed.
Vision-based traffic surveillance has been one of the most promising fields for research and improvement. Still, many challenging problems remain unsolved, such as handling vehicle occlusions and reducing false detections. Although sensing technology provides overwhelming benefits, stakeholders often forget that detecting every vehicle in a video is extremely difficult under changing environmental conditions such as illumination and occlusion. Many detection algorithms currently employed in commercialized systems work well under ideal conditions; however, many lack adaptability to the dynamic nature of highway traffic.

Related Work
Page et al. [2] developed a moving vehicle detection and tracking system using Gotcha radar systems. The authors applied feedback information from the tracking component to deal with detection issues and allied false-alarm problems. They derived a mathematical model to process multichannel SAR data so as to alleviate the combined influences of moving-target defocus and clutter-caused interference. The algorithm applies an MRP mechanism dynamically in a STAP model so as to focus moving vehicles and optimize signal-to-clutter ratios for better performance. Jyothirmai et al. [3] proposed a video-based surveillance system for security purposes. They applied a background subtraction based moving vehicle detection and tracking algorithm and introduced various threshold levels to identify moving objects of certain sizes. Li et al. [4] developed an adaptive background subtraction model in combination with a virtual detector and a blob tracking method for vision-based vehicle detection and tracking. Bhaskar et al. [5] addressed vehicle detection and tracking in traffic video data, applying a Gaussian Mixture Model (GMM) and a blob detection approach. Brahme et al. [6] applied blob analysis to perform vehicle counting for traffic surveillance: they first performed moving object segmentation and blob analysis, extracted significant features from the blobs, and on this basis estimated vehicle speed. Cho et al. [7] developed a visual feature extraction model for object detection and tracking, later applied to vehicle, pedestrian and bicyclist detection; the retrieved visual recognition information was used to enhance the object detection and data association model, which eventually enabled movement classification.
Demars et al. [8] developed moving vehicle detection and tracking in full motion video (FMV) using aerial imaging systems, emphasizing a high probability of detection and tracking even in cluttered urban environments. To achieve this, they suppressed false alarms by amalgamating detection outputs and related features from varied spectral bands. They used a GMM model for background pixel detection to identify vehicles as foreground, and fused the features extracted from the individual spectral bands to construct a multi-spectral target region. The detected target candidates were connected to targets in a tracking database by matching features from the scale-invariant feature transform (SIFT). Li et al. [9] developed an MVDT system comprising three functional phases: road detection, vehicle detection and vehicle tracking. They applied a plane-fitting feature for road detection, followed by segmented-blob and snake-blob features with an artificial neural network (ANN) to detect vehicles on the road. Lu et al. [10], to provide vehicle detection in daylight traffic, developed SEAP (Simple but Efficient After Process) to verify detection outcomes accurately, combined with an Adaboost detector to detect cars in dense traffic. Further, they developed a four-state tracking algorithm using a linear Kalman filter to track cars; this algorithm, built on a Finite State Machine (FSM), mitigated false positives in dense traffic conditions.
Kowsari et al. [11] developed a multilayer, real-time vehicle detection and tracking system applying stereo vision, optical flow and multi-view AdaBoost detectors. Using ground-plane measures retrieved from stereo information, they generated hypotheses and used trained AdaBoost classifiers, in addition to fast disparity histogramming, for Hypothesis Verification (HV). For tracking, they applied a Kalman filter and motion vectors from optical flow, which strengthened their tracking model. Fu et al. [12] presented a vehicle detection and tracking system using an SVM-based particle filtering model that incorporates the SVM score in conjunction with sampling weights; the sampling weights were applied to form a probability distribution of samples from the SVM score. Li et al. [13] developed a vision-based approach for forward vehicle detection and tracking. They first applied a histogram method to segment the shadow beneath the vehicle region, generated initial candidates by joining horizontal and vertical shadow edge features, and verified them with a vehicle classifier based on histograms of gradients and SVM; Kalman filters were applied for tracking. Cui et al. [14] developed a robust multilane detection and tracking method using an in-vehicle mono-camera and a forward-looking LIDAR. Their approach addresses key issues in real-world scenarios, especially urban driving. They applied steerable filters for lane feature detection, LIDAR-based drivable-space segmentation to validate lane marking points, and the Random Sample Consensus (RANSAC) approach for robust lane model fitting; the detected lanes then initialize particle filters for vehicle tracking, without requiring ego-motion information.
Lee et al. [15] applied the concept of tracking feature points to perform real-time vehicle detection and lane-change detection. The authors describe their approach as independent of illumination conditions. It comprises three phases: corner feature point extraction, (vehicle) feature point tracking, and lane-change event and violating-vehicle detection.
Yao et al. [16] presented a fast and robust road curb detection algorithm using 3D lidar data and Integral Laser Points (ILP) features. Range and intensity data from the 3D lidar are decomposed into elevation data and data projected on the ground plane. First, the left and right road curbs are detected for each scan line using the ground-projected range and intensity data and line-segment features. Then, the curb points of each scan line are determined using the elevation data. The ILP features are proposed to speed up both detection procedures. Finally, a parabola model and the RANSAC algorithm are used to fit the left and right curb points and generate vehicle control parameters.
Afrin et al. [17] developed a system that facilitates autonomous speed-breaker data collection, dynamic speed-breaker detection and warning generation for on-road drivers. Their system incorporates real-time tracking of driver, vehicle and timing information for speed-breaker rule violations.
Li et al. [18] developed a video-based traffic information retrieval model in which they tracked and classified passing vehicles under crowded traffic conditions. At first, they obtained the type and speed of each passing vehicle. They applied an adaptive background subtraction model to perform vehicle detection, and in a later stage executed shadow removal and road-region detection to enhance efficiency. Furthermore, the space ratio of the blob and data fusion were used to reduce the classification errors caused by vehicle occlusions.
Lin et al. [19] developed an image tracker that contains three parts: border detection, image tracking and a traffic monitoring unit. The border-detection module is a uniquely designed circuit board that provides fast CCD image processing and feature extraction; at a frame rate of 60 Hz they performed border detection on images with a resolution of 320×240 pixels. Adaptive active contour models and Kalman filtering methods were developed to track multi-lane moving vehicles.
Rashid et al. [20] presented the detection and classification of vehicles from video using time-spatial images, extending earlier work on vehicle detection and classification.
Chai et al. [21] estimated traffic parameters of vehicle motion using an automatic vehicle classification and tracking technique at crossroads. The technique is based on projective rectification of video frames and is capable of categorizing detected vehicles and computing vehicle motion parameters at crossroads. Kothiya et al. [22] note that the first step in tracking is to detect the moving object in the frame; detected objects are then classified as vehicles, humans, swaying trees, birds or other moving objects. Associating objects across consecutive frames is among the most challenging tasks in image processing: complex object motion, irregular object shape, object-to-object and object-to-scene occlusion, and real-time processing requirements all pose challenges. Applications of object tracking include surveillance and security, traffic monitoring, video communication, robot vision and animation. In [23], moving objects are segmented from video frames by estimating the trajectory motion of objects via motion vectors, identifying a structuring element, and finally applying morphological operators to improve the quality of the generated foreground mask.

Proposed Method
The proposed system employs a directional filtering scheme for detecting moving vehicles, considering intensity and orientation variance as detection parameters. In addition, a multi-directional intensity-stroke estimation approach has been applied that plays a significant role in distinguishing the vehicle region from other background content. The implementation of a robust morphological scheme, including thinning and dilation with well-calibrated content-region identification, makes the proposed system more robust and efficient. A feature clustering scheme with heuristic-filtering-based blob analysis makes the model more efficient and precise for accurate moving vehicle detection, and a bounding box generation scheme has been incorporated to enable better visualization of traffic monitoring. The input RGB video frames are first converted into gray-scale images (Figure 2), which is then followed by the filtering and vehicle segmentation process.

Moving Vehicle Region Detection
Unlike conventional approaches, this work constructs a feature map using multiple significant characteristics of the moving vehicle, such as edge strength, density and variance of orientations, along with the background subtraction scheme discussed in the previous section. Unlike the majority of existing systems, where background extraction alone is the foundation for vehicle detection, a multilevel optimization model is proposed here that ensures efficient video analysis and feature mapping for the final tracking stage. The resulting feature map is a gray-scale image of the same size as the input frames, in which pixel intensity signifies the probability of a vehicle at that location in the current frame.

Background Extraction
To retrieve the background, a background subtraction model is implemented that takes the per-pixel mean of all video frames as the background estimate. After retrieving the background image, Region of Interest (ROI) extraction is performed. Here, vehicles moving towards the camera are tracked over a single-lane region; in other words, the camera is mounted at the roadside such that only one lane is visible for vehicle detection and tracking. First, the RGB video frames are converted into gray scale. Before background subtraction, background objects such as trees and other non-vehicle objects are eliminated so that detection of the moving vehicle remains intact. To perform background subtraction, morphological functions and a connected-component based scheme are applied, where ROI feature vectors connected with each other signify the vehicle region. Further, morphological closing and thinning operators are applied to segment the vehicle region.
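As a concrete illustration, the per-pixel mean background model described above can be sketched in NumPy. The synthetic "video" below (a fixed ramp scene with a moving bright blob) is invented purely for illustration; a real system would feed in the gray-scale frames of the traffic video.

```python
import numpy as np

# Toy stand-in for a gray-scale video: 4 frames of an 8x8 scene.
# The background is a constant intensity ramp; a bright 2x2 "vehicle"
# moves one column per frame.
ramp = np.tile(np.linspace(50, 100, 8, dtype=np.float64), (8, 1))
frames = np.stack([ramp.copy() for _ in range(4)])
for t in range(4):
    frames[t, 3:5, t:t + 2] = 255.0  # moving bright blob

# Background model: per-pixel mean over all frames (as in the text).
background = frames.mean(axis=0)

# Foreground evidence for one frame is the absolute difference
# from the background model.
diff = np.abs(frames[2] - background)
```

Pixels never touched by the moving blob match the mean exactly (difference 0), while blob pixels stand out strongly, which is what the subsequent thresholding step exploits.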
In addition to the background subtraction discussed above, a multi-directional filtering and fusion scheme has been introduced (presented in the next sub-section) that assures optimal performance for precise background extraction and moving candidate region (vehicle) detection. The developed model intends to avoid irrelevant objects (e.g., waving trees, road markings) whose movement causes ambiguity for precise vehicle detection and tracking. The difference between each individual frame and the background model, after multiplying both with the extracted ROI, is used to perform vehicle detection; the background-subtracted frame is shown in Figure 3. The proposed system employs a thresholding-based segmentation scheme that converts the gray-scale image into a binary image. The selection of an optimal threshold plays a vital role in assuring optimal image segmentation; therefore, to distinguish the foreground moving vehicle from the static background, the conditional thresholding mechanism of equation (1) is used:

T(x, y) = 1 if D(x, y) > th, and T(x, y) = 0 otherwise,          (1)

where T(x, y) represents the thresholded video frame, th denotes the threshold applied, and D(x, y) represents the (background-subtracted) current frame.
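A minimal sketch of equation (1) in NumPy, using an invented difference frame and threshold value for illustration:

```python
import numpy as np

# Hypothetical background-subtracted frame D(x, y) and global threshold th.
D = np.array([[ 5, 12, 40],
              [60,  8, 90],
              [ 3, 70,  2]], dtype=float)
th = 30

# Equation (1): T(x, y) = 1 where the difference exceeds th, else 0.
T = (D > th).astype(np.uint8)
```

The resulting binary mask `T` marks candidate foreground (vehicle) pixels for the morphological and blob-analysis stages that follow.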

Directional filtering
To achieve optimal performance, the magnitude of the second derivative of intensity is applied as the edge-strength measure, because it facilitates detection of the intensity peaks that usually characterize a vehicle in the current video frame. The edge density of the moving vehicle is estimated from the average edge strength within a frame, which has already been converted from RGB to gray scale. To enhance detection efficiency, a multi-directional filtering scheme is proposed that estimates the variance of orientations in four directions: 0°, 45°, 90° and 135°. Here 0° indicates a horizontal scan, 90° a vertical orientation, and 45° and 135° the diagonal orientations. For simplicity of implementation, only horizontal and vertical directional filtering has been applied, followed by fusion of all directional feature vectors to characterize the detected vehicle. The predominant advantage of this approach is that, unlike conventional pixel-by-pixel, row-column scanning, it scans the image in multiple directions simultaneously, which enhances computational efficiency and detection rate. Convolution with a compass operator (the Sobel operator depicted in Figure 4) retrieves the multi-directional edge intensity E_θ, θ ∈ {0°, 45°, 90°, 135°}, of the moving vehicle frames. These directional intensity vectors comprise the edge characteristics of the frame, including the moving vehicle, enabling effective vehicle detection and seed estimation. Vertical and horizontal edges form the most significant strokes of the object (here, the moving vehicle) in an image, and their lengths can also represent the dimensional characteristics of the corresponding vehicle, which can be used to classify vehicles by geometry. By extracting and grouping these strokes, vehicle regions of different heights or dimensions can be located precisely. In practical scenarios there can be both strong vertical and horizontal edges reflecting the shape of a vehicle, and the edges generated by moving objects can have large dimensions, especially length. Hence, classifying edges into long and short ones makes it possible to eliminate extremely large (vertical or horizontal) edges while retaining the short edges for further vehicle detection processing. Because of non-uniform background, color, intensity or illumination, long vertical edges generated by non-vehicle objects can have large intensity and feature variance (pixel uniformity, color variations, etc.); after thresholding, such long vertical edges may turn into distorted short edges that cause false alarms. Similarly, non-uniform vehicle surfaces due to lighting, shadows and the vehicle's own shape can produce broken vertical edges. To remove the false groupings introduced by these broken edges, a two-stage edge generation scheme has been applied. In the first stage, the strong vertical edges are obtained as given in equation (2).
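The multi-directional convolution step can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the kernels used here are the standard compass Sobel masks (which may differ from the exact kernels of Figure 4), the test image is synthetic, and the fusion rule (per-pixel maximum of the orientation magnitudes) is one reasonable choice.

```python
import numpy as np

def conv2d(img, k):
    """Naive 'same' 2-D convolution with zero padding."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    kf = k[::-1, ::-1]  # flip kernel for true convolution
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * kf)
    return out

# Compass Sobel kernels for the four orientations (0°, 45°, 90°, 135°).
K = {
    0:   np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float),
    45:  np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], dtype=float),
    90:  np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float),
    135: np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=float),
}

# Synthetic frame: a bright vertical stripe on a dark background.
img = np.zeros((7, 7))
img[:, 3] = 100.0

# Per-orientation edge magnitude E_theta, fused by the maximum response.
E = {theta: np.abs(conv2d(img, k)) for theta, k in K.items()}
fused = np.maximum.reduce(list(E.values()))
```

As expected, the fused response is zero in flat regions and large along the stripe boundary, where the 90° (vertical-edge) kernel dominates.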

E_strong = |E_90|          (2)

where E_90 represents the vertical edge-intensity image, i.e., the 2D convolution of the original image with the 90° kernel, and | • | represents an operator used to obtain a binary outcome of the vertical edges. As this stage retrieves only the strong edges, it is not overly sensitive to the threshold. In the second stage, the weak vertical edges are retrieved as depicted in equations (3-5): morphological dilation with a vertical linear structuring element of size s × 1 is introduced, which plays a significant role in eliminating the impact of slightly skewed edges, followed by a closing operator that forces broken strong vertical edges to be closed. There is a trade-off in selecting the size of the structuring element: a small value is computationally efficient and consumes less time at the expense of false positives, while a large value can significantly increase detection precision but at elevated computational cost.
In the proposed model, considering the requirement of an effective and efficient system, the structuring-element size has been assigned as s = (1/25) × h, with h the frame height, which enables optimal vehicle detection results at an acceptable computational cost for a real-time vehicle surveillance system. The final edges are the combination of the strong and the weak edges, retrieved using equation (6).
In the proposed model, a morphological thinning operator has been implemented, followed by a connected-component labelling mechanism, as shown in equations (7) and (8).
Here, the morphological thinning function reduces the resulting edges to a width of one pixel. It is followed by labelling of the vertical edges with a connected-component labelling operator; 8- and 4-pixel connectivity are applied for labelling the edges. After connected-component labelling, each individual edge is uniquely labelled as a single connected component with a distinctive component number. The labelled edge frame is then processed by a length-labelling step that annotates edge pixels with their respective dimensions (lengths): all pixels belonging to the same edge are labelled with the same number, proportional to the edge's length. Since a higher value in the length-labelled frame represents a long edge, a thresholding scheme is employed to retain the short edges. Because achieving 100% automatic, precise vehicle detection in a moving scene is a highly intricate task, the effort in this work is to reduce the false negatives of missed detection. Hence, in addition to edge intensity and variance of orientation, a low threshold value is applied to optimize vehicle detection and enable precise speed estimation for an efficient traffic surveillance system.
The outputs of the directional filters (vertical and horizontal) are given in Figure 5 and Figure 6. The combined vehicle detection result is given in Figure 7.

Feature Mapping
This work exploits the practical fact that regions containing a moving vehicle have significantly higher edge density, edge strength and variance of orientations than non-vehicle background regions. These key characteristics are used to enhance vehicle region detection by generating feature-map values that significantly decrease false regions and optimize true candidate (moving vehicle) region detection. The overall process is illustrated in equations (9)-(11).
Here, a morphological dilation operator with an n × n structuring element is applied to the short vertical edge image to obtain precise vehicle region detection. Multi-orientation edge information E_θ, θ ∈ {90°, 180°}, is used to refine the potential candidate detection by fusing E_90 and E_180. In equation (11), FM represents the resulting feature map and N represents a normalization that maps the intensities (feature-map values) into the range [0, 255]. A function weight(x, y) is applied that estimates the weight of pixel (x, y) based on the number of edge orientations responding at that pixel in the video frame; using weight(x, y), the proposed approach discriminates the candidate regions (moving vehicles) from background regions.
The vertical (θ = 90°) and horizontal (θ = 180°) scanning outputs are given in Figure 6. The moving vehicles and their associated dimensional features can then be clustered to localize the moving vehicle on the road. The characteristics of the components connected with a moving vehicle differ from those of the static background, and since the intensity of the feature map depicts the probability of a vehicle in the current frame, simple thresholding can distinguish regions with higher vehicle likelihood. A morphological dilation operator then connects nearby regions while isolating those located far away; in the developed model, a dilation operator with a square structuring element joins the vehicle regions in the retrieved binary image.
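A minimal sketch of the orientation-weighted fusion and [0, 255] normalization described above. The two small edge-density maps are invented, and the weighting rule (count of responding orientations) is one plausible reading of weight(x, y); the paper's exact equations (9)-(11) may differ in detail.

```python
import numpy as np

# Hypothetical per-orientation edge maps (e.g., from the directional filters).
E90 = np.array([[0., 2., 8.], [1., 9., 7.], [0., 0., 1.]])
E180 = np.array([[0., 1., 6.], [0., 8., 9.], [1., 0., 0.]])

# weight(x, y): number of orientations responding at each pixel.
weight = (E90 > 0).astype(float) + (E180 > 0).astype(float)

# Fuse the orientation responses, weighted per pixel, then normalize
# the resulting feature map FM into [0, 255].
fused = weight * (E90 + E180)
FM = np.zeros_like(fused) if fused.max() == 0 else 255.0 * fused / fused.max()
```

Pixels where both orientations respond (likely vehicle strokes) are boosted toward 255, while background pixels with no directional response stay at 0, which is exactly the contrast the subsequent thresholding step relies on.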

Heuristic filtering Based Blob Analysis
In a highly sensitive application such as traffic surveillance, filtering the retrieved regions is of great significance. In this work, a heuristic filtering scheme is applied for blob analysis and unwanted blob removal. The scheme imposes two constraints that filter out blobs which do not correspond to vehicle regions or the ROI. The first constraint removes minute (very small) and non-connected isolated blobs using a threshold on blob area; considering the area of the blob region rather than an absolute per-blob value enables the system to work for vehicles of any dimension or size. The second constraint filters out blobs whose widths are very small compared to their heights, because for realistic vehicles the height is usually not greater than the length or width. Thus, the proposed system can efficiently remove insignificant blobs and make prediction more accurate and precise.
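The two heuristic constraints can be sketched directly. The blob records and both threshold values below are invented for illustration; in practice they would be tuned to the camera geometry.

```python
# Hypothetical blobs with their bounding-box width/height and pixel area.
blobs = [
    {"w": 40, "h": 30, "area": 900},   # plausible vehicle
    {"w": 3,  "h": 4,  "area": 10},    # tiny isolated noise blob
    {"w": 6,  "h": 60, "area": 300},   # tall, thin non-vehicle structure
]

MIN_AREA = 100       # constraint 1: drop minute / isolated blobs
MIN_W_OVER_H = 0.5   # constraint 2: width must not be far below height

kept = [b for b in blobs
        if b["area"] >= MIN_AREA and b["w"] / b["h"] >= MIN_W_OVER_H]
```

Only the first blob survives: the second fails the area constraint and the third fails the width-to-height constraint, mirroring the two filters described in the text.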

Boundary Boxes Generation
The blobs reflecting a vehicle in the current frame are enclosed inside boundary boxes. In the proposed model, the boundary-box coordinates are estimated from the extreme top, bottom, left and right points of each blob reflecting a vehicle in the current frame. To avoid missing vehicle-related pixels lying near or just outside the initial boundary, the dimensional parameters (height and width) of the boundary box are padded by a small amount. To make detection more precise, visible and adaptive to road conditions, large boxes such as borders and highway dividers are ignored, and an additional adaptive padding is introduced that improves accuracy and efficiency, especially for tracking. The boundary boxes generated for each detected vehicle are saved, which makes tracking more efficient. The combined vehicle detection result is given in Figure 10.
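The box-from-blob step with padding can be sketched as follows; the blob mask, padding amount and clamping to the frame border are illustrative choices.

```python
import numpy as np

# A detected vehicle blob inside a 20x20 frame.
mask = np.zeros((20, 20), dtype=bool)
mask[5:10, 8:15] = True

# Tight box from the extreme top/bottom/left/right blob pixels.
ys, xs = np.nonzero(mask)
top, bottom = ys.min(), ys.max()
left, right = xs.min(), xs.max()

# Pad by a small amount so pixels just outside the tight box are not lost,
# clamped to the frame border.
pad = 2
h, w = mask.shape
box = (max(top - pad, 0), min(bottom + pad, h - 1),
       max(left - pad, 0), min(right + pad, w - 1))
```

The padded box slightly over-covers the blob on every side, which is the behaviour the text asks for before the boxes are saved for tracking.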

Vehicle Tracking
The proposed vehicle tracking system is based on the feature tracking concept: the extracted features are tracked over sequential frames of the input traffic video. Unlike conventional tracking approaches that use an object matching algorithm based on Mahalanobis distance, a track identification and replica-matching based tracking system has been developed. Initially, the feature mapping for all frames is estimated and a track graph is prepared; to eliminate the probability of error, a few initial frames are ignored. A track (a section of road area defined by the user) is deployed that traces the presence or passing of bounding boxes, thereby indicating the number of vehicles crossing the track. A search scheme locates bounding boxes in each frame and marks their tracking status; the implemented function enables swift bounding box detection by means of a simultaneous horizontal and vertical search-and-match scheme. When a bounding box is detected crossing the defined track, the vehicle is counted and a template marking indicates the status of the passing vehicle. The system also includes an object matching scheme that estimates the distance between the object features of the previous frame (stored in the track-graph matrix) and those of the instantaneous frame. In addition, a marking template for vehicle ID presentation and speed estimation makes the system easier to interpret.
To evaluate the correctness of detection algorithms in videos, one should examine the confusion matrix, a matrix plot of predicted versus actual classes of the samples. In detection, the system may identify a vehicle that is not actually a correct vehicle; therefore, two further statistics, precision and recall, are employed. Precision measures the ratio of the number of correctly detected vehicles to the total number of vehicles detected by the system. Recall is the ratio of the number of correctly detected vehicles to the total number of expected vehicles in a video. The precision value indicates how accurate the system is in detecting only correct vehicles, while the recall value signifies to what extent the system is capable of detecting all expected vehicles. Precision, recall and F-measure are calculated for vehicles based on the confusion matrix. Table 1 shows the measured values for all the videos, and Figure 12 gives a graphical representation of the precision, recall and F-measure values. From the table, we observe that the obtained results show good accuracy.
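The virtual-track counting logic described above can be sketched as follows. The representation of a track as a single row coordinate, the box format (top, bottom, left, right), and the sample trajectories are all illustrative assumptions.

```python
# Illustrative row of the user-defined virtual track across the lane.
TRACK_Y = 50

def count_crossings(tracks):
    """tracks: dict of vehicle id -> per-frame boxes (top, bottom, left, right).
    A vehicle is counted once if, in some frame, its box's vertical extent
    overlaps the track line."""
    count = 0
    for boxes in tracks.values():
        if any(top <= TRACK_Y <= bottom for top, bottom, _, _ in boxes):
            count += 1
    return count

tracks = {
    1: [(10, 30, 0, 20), (35, 55, 0, 20), (60, 80, 0, 20)],  # crosses at frame 2
    2: [(70, 90, 30, 50), (75, 95, 30, 50)],                 # never reaches it
}
n = count_crossings(tracks)
```

Counting per track identity rather than per frame avoids double-counting a vehicle that overlaps the line in several consecutive frames.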
Figure 13 shows the ROC curve for the proposed method, plotting the true positive rate against the false positive rate. Further, we also conducted experimentation on benchmark datasets, i.e., KITTI and DETRAC, and compared the results with the proposed method, as shown in Figure 14. In addition, we also compare the accuracy with precision and recall, as shown in Table 2.

Conclusion
Considering the limitations of existing systems, such as conventional background subtraction and sensitivity to noise and illumination, a novel multi-directional filtering and fusion based background subtraction model was developed in this work that considers intensity, moving-pixel orientation and related characteristics for moving vehicle detection. The proposed multi-directional intensity-stroke estimation scheme was found to strengthen moving vehicle candidate detection and tracking, distinguishing the moving vehicle region from other background content. In addition, the implementation of the enhanced thinning and dilation based morphological processing made the proposed system more robust and accurate. After moving vehicle detection, feature mapping was performed, incorporating feature clustering and a heuristic filtering approach, which made blob analysis more efficient at detecting the precise candidate vehicle region. Later, boundary box generation facilitated precise vehicle tracking. In addition to efficient moving vehicle detection and tracking, an efficient vehicle speed estimation scheme was developed that enables real-time vehicle tracking.

Figure 1: Proposed vehicle detection and tracking system

Figure 2: Input video frame and its conversion to a gray-scale frame

Figure 3: Background subtraction and threshold estimation

Figure 11: Vehicle detection results
Experimentation
To examine the effectiveness of the proposed moving vehicle detection and tracking for efficient traffic surveillance, standard vehicle traffic data have been used; about 10 sample traffic videos were collected. The input video data are in RGB form and are converted into gray-scale format for processing. To evaluate the correctness of detection in the videos, one should examine the confusion matrix, a matrix plot of predicted versus actual classes of the samples. In detection, the system may identify a vehicle that is not actually a correct vehicle; therefore, two further statistics, precision and recall, are employed. The precision value indicates how accurate the system is in detecting only correct vehicles, while the recall value signifies to what extent the system is capable of detecting all expected vehicles:
Precision = Total Number of Correctly Detected Vehicles / Total Number of Detected Vehicles
Recall = Total Number of Correctly Detected Vehicles / Total Number of Expected Vehicles
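The evaluation formulas above can be computed directly; the counts below are invented for illustration, and F-measure is taken as the usual harmonic mean of precision and recall.

```python
def detection_metrics(correct, detected, expected):
    """Precision, recall and F-measure from the counts defined in the text."""
    precision = correct / detected if detected else 0.0
    recall = correct / expected if expected else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Illustrative counts for one video: 18 correct out of 20 detections,
# with 24 vehicles actually present.
p, r, f = detection_metrics(correct=18, detected=20, expected=24)
```

Guarding the divisions keeps the metrics well-defined for a video with no detections or no expected vehicles.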

Figure 12: Precision, recall and F-measure results for detection
Figure 13: ROC curve for the proposed method

Figure 14: Accuracy of the proposed method compared on the KITTI and DETRAC datasets

Table 1: Precision, recall and F-measure values for detection

Table 2: Performance analysis of multiple vehicle detection methods on benchmark datasets