An Automatic Vehicle Routing and Tracking Technique Based on Video and Image Processing Techniques

In this research work, an advanced route tracking application for driverless vehicles is designed based on video and image processing techniques. The frames extracted from the input video contain noise as well as severe distortion, both of which degrade route tracking and lead to erroneous information gathering. In this paper, a multimedia-based route tracking application is introduced to obtain road conditions as well as route features. The initial images are filtered with an adaptive median filter to minimize noise. After filtering, the road condition can easily be identified and the target separated from the background; the boundary information then helps to extract the neighbourhood target pixels. The subsequent processing is based on angle, derivative-path, and curvature calculations. From this primary processing a preview image is obtained on the screen, according to which the driving speed and the front wheel rotation are controlled. A Pure Pursuit matching algorithm handles the later control and realization. The experimental outcomes show that the proposed application supports driverless vehicle tracking effectively. The investigation was verified at various reference speeds, with tracking deviation kept within 0.04 meters.


Introduction
The World Health Organization evaluated road traffic safety in 182 countries worldwide in 2017. According to the assessment, about 1.24 million people die per year in road collisions around the world, and almost 50 million people are injured. The effect of these deaths and accidents on the victims' families is immeasurable; it has wreaked havoc on their lives and even their careers. If no action is taken, road traffic deaths are projected to become the world's seventh leading cause of death by 2030, according to the World Health Organization. According to estimates, human causes are responsible for more than 90% of road fatalities, including breaches of traffic laws, exhaustion caused by long-term repeated driving, deficiencies of human drivers' vision, and delays in driving emergency response [1]. The primary causes of road traffic collisions are improper driving conduct and other factors, which also contribute to a slew of traffic issues, including traffic jams. To prevent these issues, vehicles must have advanced functions such as self-identification of routes, self-planning of travel directions, and self-control of driving, which free drivers from complicated environmental perception and inefficient driving behaviour, resulting in a safer driving experience. Improving driving vehicle safety technologies and efficiency, as well as reducing road traffic injuries, has become a social problem of common concern for governments and academic agencies, as well as one of the main challenges facing science and technology growth [2].
Unmanned vehicles are mostly used to increase road safety, minimize traffic congestion, and reduce vehicle fuel consumption and emissions. Many countries around the world are financing research into autonomous vehicles and intelligent transportation technology, especially in the fields of driverless vehicle direction tracking, lane keeping, and lane changing. The aim of route tracking is to get the vehicle to follow the desired path while maintaining its lateral stability. The key to route tracking is its control algorithm; for a self-driving car, the route tracking control algorithm is essential. As a result, route tracking is an important technology in the driverless vehicle research direction [3]. Early route tracking methods, such as geometric path planning and the rolling path system, are more appropriate for indoor robots [4]. The route tracking systems described above are not applicable to the driverless vehicle, since its body is a non-holonomically constrained system limited by turning radius, angular speed, and other factors. As a result, unmanned vehicle route tracking has become a hot research subject among related academics. To control a driverless vehicle running along the target path, Jeon et al. used the lateral deviation and its rate of change as inputs and the front wheel rotation angle as the output of a fuzzy controller [5]. Adaptive sliding mode controllers, for example, can be used to achieve unmanned vehicle direction tracking and reduce control system jitter and external interference using the Lyapunov stability principle [6]. With the aim of reducing lateral deviation, Ojha et al. used model predictive control and feedforward control to achieve four-wheel-steering driverless vehicle direction tracking [7]. Depatla uses robust H-infinity output feedback control to track the direction of the driverless vehicle without taking its lateral speed into account; simulation tests are also conducted, but there is a substantial difference between the theoretical and real-world findings [8].
Path tracking for autonomous vehicles consists predominantly of path identification, steering control, and speed control, all of which are based on video image processing. The various methods of unmanned vehicle route tracking presented in the above literature ignore video image analysis; as a result, the initial video image carries a vast amount of interference detail and significant distortion, preventing the processor from using it directly and reducing path tracking accuracy. In this article, a video-image-processing-based route tracking system for driverless vehicles is presented. The driverless vehicle collects preview data from video image processing and creates a preview point sequence search model based on the current pose and the relative motion relationship between the preview point sequence and the road. It predicts the curvature change of the direction using a multi-point preview technique and controls the vehicle accordingly. The control quantity of the front wheel rotation angle is determined using the Pure Pursuit algorithm [9] to control the steering of the driverless car. Finally, laboratory research is used to test the feasibility of the proposed tracking system.

Wave filtering
The road state photos captured by the camera contain a lot of distortion detail due to the effect of noise. The image is smoothed using a median filtering technique to reduce the interference of noise on image quality. In median filtering, the gray value at a noisy pixel is replaced with the median gray value of its neighbourhood. This approach preserves the image's edges while still filtering out the noise, and its filtering effect is superior to that of mean filtering. The median filtering approach is chosen in the video image processing stage of unmanned vehicle route tracking because the subsequent contour extraction demands high image quality [10].
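As a minimal sketch of this idea (not the paper's implementation), the following pure-NumPy median filter replaces each pixel with the median of its k x k neighbourhood, which removes impulse noise while keeping edges:

```python
import numpy as np

def median_filter(img, k=3):
    """Apply a k x k median filter to a 2-D grayscale image (edge-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            # median of the k x k window centred on (i, j)
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A 5x5 image with a single salt-noise pixel: the filter removes the spike
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255                      # impulse noise
clean = median_filter(img, 3)
print(clean[2, 2])                   # spike replaced by the local median (10)
```

In practice a vectorized routine such as `scipy.ndimage.median_filter` would be used on full video frames; the loop version above only illustrates the definition.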

Binarization
The filtered image should be binarized to obtain a binary image in order to minimize computational complexity, save processing time, and retrieve road information more intuitively. The image's gray values are compared against a threshold, and the image is split into two parts: the target image and the background image [11]. The choice of threshold is crucial: too many target points are misclassified as background if the threshold is set too high, and too many background points are misclassified as targets if it is set too low. Static and dynamic thresholds are the two types of threshold. Figure 1 depicts the basic procedure. T denotes the threshold, i the image row, j the image column, and Image[i][j] the gray value at pixel (i, j). If the gray value at a pixel is smaller than the threshold, the pixel is set to 0, meaning that the image at that point is background; otherwise the pixel is set to 1, indicating that the point belongs to the target image. The loop proceeds pixel by pixel until i and j reach their maxima imax and jmax.
Binarization requires selecting a dynamic threshold that adapts to the driving condition of the autonomous vehicle in real time. The dynamic threshold is determined using the Otsu procedure, which chooses the best threshold from 0 to 255 so as to maximize the variance between the background and target sections of the image.
The greater the between-class variance of the two sections of the picture, the better the threshold [12]; conversely, a small variance indicates an irrational threshold choice. As a result, maximizing the variance between groups reduces the likelihood of misclassification, and the Otsu method is easy to compute, fast, and unaffected by image brightness and contrast [13]. First, calculate the input image's normalized histogram p_i = n_i / N, where n_i is the number of pixels at gray level i and N is the total number of pixels.
From the normalized histogram, the cumulative class probability ω(k) = Σ_{i=0..k} p_i, the cumulative mean μ(k) = Σ_{i=0..k} i·p_i, and the global mean μ_T = Σ_{i=0..255} i·p_i are computed.
For k = 1, 2, ..., 255, calculate the between-class variance σ_B²(k) = [μ_T·ω(k) − μ(k)]² / {ω(k)·[1 − ω(k)]}. The optimal dynamic threshold is the k that maximizes σ_B²(k). Alternatively, if the gray-level histogram is clearly bimodal, the gray value at the trough between the two peaks can be chosen as the threshold; this approach fits best for photos that have obvious double peaks and a deep valley bottom.
The trough-based method is not useful for histograms with no visible double peaks or with a broad, flat valley bottom [14]. In realistic applications the picture is often influenced by noise, so two distinct peaks may not appear. Because of the dynamic and changing road conditions encountered during the driverless vehicle's driving phase, the Otsu approach was selected to compute the threshold.
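The Otsu computation described above can be sketched as follows; this is a straightforward NumPy implementation of the standard between-class-variance maximization, not the authors' code:

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold k* in 0..255 maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                 # normalized histogram p_i = n_i / N
    omega = np.cumsum(p)                  # class probability omega(k)
    mu = np.cumsum(np.arange(256) * p)    # cumulative mean mu(k)
    mu_t = mu[-1]                         # global mean mu_T
    # between-class variance sigma_B^2(k); guard against division by zero
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b2))

# Two well-separated gray populations: the threshold falls between them
img = np.concatenate([np.full(500, 40), np.full(500, 200)]) \
        .astype(np.uint8).reshape(25, 40)
t = otsu_threshold(img)
print(t)   # a value in [40, 200): binarizing with img > t separates the classes
```

OpenCV's `cv2.threshold(..., cv2.THRESH_OTSU)` computes the same quantity in optimized C code; the explicit version above mirrors the formulas in the text.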

Contour extraction
If all binary images are analyzed and stored in full, the computational complexity and processing time are high.
The four-neighbourhood approach can be used to extract the boundary of the binary image in order to reduce the interpretation and processing steps and speed up the response of video image processing. If the current pixel value is 1, the pixel belongs to the target image; if the pixel and its four neighbours are all 1, the pixel is an interior point and its value is set to 0, returning it to the background; otherwise the current pixel value is left unchanged [15]. In this way only the boundary pixels of the target remain, and the method efficiently retains the road condition data needed by autonomous vehicles.
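A minimal sketch of the four-neighbourhood boundary rule (assuming, as in the text, that 0 marks background and 1 marks target pixels):

```python
import numpy as np

def boundary_4(binary):
    """Keep only target pixels (value 1) with at least one 0 four-neighbor."""
    b = np.pad(binary, 1, mode="constant")          # pad with background
    core = b[1:-1, 1:-1]
    # interior pixels: value 1 with all four neighbors equal to 1
    interior = (core == 1) & (b[:-2, 1:-1] == 1) & (b[2:, 1:-1] == 1) \
               & (b[1:-1, :-2] == 1) & (b[1:-1, 2:] == 1)
    out = core.copy()
    out[interior] = 0                               # interior pixels removed
    return out

# A solid 4x4 block of 1s: only its one-pixel-wide ring survives
img = np.zeros((6, 6), dtype=np.uint8)
img[1:5, 1:5] = 1
edge = boundary_4(img)
print(edge.sum())   # 12 boundary pixels (16 total minus 4 interior)
```

Only the ring of pixels touching the background survives, which is exactly the contour data the later preview-point steps consume.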
After obtaining the necessary road condition information through video image processing, the unmanned vehicle uses a multi-point sequence to extract route preview information from it. The route preview information includes the lateral position deviation, preview deviation angle, path curvature, and so on, and enables the unmanned vehicle to follow its path.

Unmanned vehicle path tracking

Relative motion model of unmanned vehicle and path
The relative motion model of the driverless vehicle and path is defined after simplification and abstraction, as seen in Fig. 2. The center of the local coordinate system XO'Y is the midpoint of the rear axle, and the forward orientation of the X axis is the vehicle's heading direction; δ is the front wheel rotation angle; φ is the heading angle, that is, the angle between the vehicle's forward direction and the E axis; v is the forward speed; and L is the wheelbase. Video image processing produces a sequence of ordered longitude and latitude coordinates that are converted from the WGS-84 coordinate system to a plane rectangular coordinate system. The global coordinate system is then defined with the starting point of the trajectory point series serving as the origin. In the global coordinate system, the coordinates of the trajectory points of the driverless vehicle are (E(i), N(i)), where i is the serial number of the trajectory point.
The above point series represents the target route that the unmanned vehicle is tracking. If the position coordinate of the midpoint of the rear axle of the vehicle is (g_e, g_n) in the global coordinate system, the kinematics model of the autonomous vehicle can be expressed as: ġ_e = v·cos φ, ġ_n = v·sin φ, φ̇ = v·tan δ / L.
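This rear-axle kinematic model can be integrated numerically with a simple Euler step. The state equations below are the standard bicycle-model form assumed from the text (δ the front wheel angle, φ the heading, L the wheelbase), not code from the paper:

```python
import math

def step(ge, gn, phi, v, delta, L, dt):
    """One Euler step of the rear-axle kinematic bicycle model:
    ge' = v*cos(phi), gn' = v*sin(phi), phi' = v*tan(delta)/L."""
    ge += v * math.cos(phi) * dt
    gn += v * math.sin(phi) * dt
    phi += v * math.tan(delta) / L * dt
    return ge, gn, phi

# Straight driving east (delta = 0) keeps the heading constant
ge, gn, phi = 0.0, 0.0, 0.0
for _ in range(100):
    ge, gn, phi = step(ge, gn, phi, v=5.0, delta=0.0, L=2.5, dt=0.01)
print(round(ge, 2), round(gn, 2))   # ~5.0 m east, 0.0 m north after 1 s
```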

Preview deviation angle and path curvature
When driving manually, the driver's eyes continuously preview the forward path, deciding the vehicle's course, angle, and speed based on the relevant details of that path, so that the vehicle approaches the forward path as closely as possible [16]. By analogy to manual driving behaviour, the notions of path curvature and preview deviation angle are adopted for the problem of unmanned vehicle path tracking control. Following the literature, the angle between the moving direction of the unmanned vehicle and the line connecting the preview tracking point and the current location point is called the preview deviation angle. Actual study and research show that the lateral control problem of direction tracking can be turned into the tracking problem of the preview deviation angle [17]. To describe the curvature change at the preview tracking point of the goal path, a multi-point preview technique is proposed. In addition to choosing one preview point as the tracking point on the target path extracted by video image processing, the other preview points are used only to describe the curvature change of the path and obtain the curvature of the forward path. First, the preview point search algorithm model is defined, as seen in Fig. 3.

Figure 3. Model of preview point search algorithm
In Fig. 3, z_j is the preview point series obtained on the target direction, that is, the trajectory point sequence, and e_d is the lateral position deviation, that is, the distance between the actual position and the tracking path trajectory. The trajectory point sequence is converted into the local coordinate system XO'Y; the coordinates of a trajectory point in the local coordinate system are expressed as (X(i), Y(i)) and obey formula (8): X(i) = [E(i) − g_e]·cos φ + [N(i) − g_n]·sin φ, Y(i) = −[E(i) − g_e]·sin φ + [N(i) − g_n]·cos φ. The steps for finding the preview tracking point z_1 are as follows: first, the trajectory points are transformed into the local coordinate system using formula (8), and the point nearest to the vehicle in the sequence of trajectory points describing the target path is found, which is the starting point of the search; second, starting from this point and moving in the direction of the vehicle body, the points in the trajectory sequence are examined in turn until one satisfies formula (9).
In formula (9), z_1 is the ordinal number of the first point in the sequence of trajectory points that satisfies the condition; this is the preview tracking point z_1, which completes one search for the preview tracking point. As the car drives to a different location, the steps above are repeated and a new search is started. Furthermore, the driver is found to regulate speed primarily in response to changes in road curvature [18]. As a result, to control the longitudinal speed of the driverless vehicle, the remaining preview points z_j, where j = 1, 2, ..., n, must be found; they specify the degree of curvature of the route across multiple preview points. The curvature change of the goal direction can be represented with a polyline in the local coordinate system XO'Y, as seen in Fig. 4.
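The two-step search for the preview tracking point z_1 can be sketched as below. The coordinate transform follows formula (8); the stopping condition (first point ahead of the vehicle whose distance reaches the preview distance ld) is an assumed reading of formula (9), since the original formula is garbled in the text:

```python
import math

def to_local(pts, ge, gn, phi):
    """Formula (8): rotate/translate global (E, N) trajectory points into
    the vehicle frame XO'Y (X forward along the heading phi)."""
    local = []
    for E, N in pts:
        dE, dN = E - ge, N - gn
        X = dE * math.cos(phi) + dN * math.sin(phi)
        Y = -dE * math.sin(phi) + dN * math.cos(phi)
        local.append((X, Y))
    return local

def find_preview(pts_local, ld):
    """Assumed formula (9): starting from the nearest point, take the first
    point ahead of the vehicle whose distance reaches the preview distance ld."""
    start = min(range(len(pts_local)), key=lambda i: math.hypot(*pts_local[i]))
    for i in range(start, len(pts_local)):
        X, Y = pts_local[i]
        if X > 0 and math.hypot(X, Y) >= ld:
            return i
    return len(pts_local) - 1   # fall back to the last point of the path

# Straight path along E; vehicle at the origin heading east
path = [(float(i), 0.0) for i in range(20)]
local = to_local(path, ge=0.0, gn=0.0, phi=0.0)
z1 = find_preview(local, ld=5.0)
print(z1)   # index 5: the first point at least 5 m ahead
```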

Figure 4. Schematic diagram of path bending degree calculation
The degree of path curvature at the preview point series is described by the path bending degree C = Σ_j |θ_{j+1} − θ_j|.
|θ_{j+1} − θ_j| denotes the relative difference of the tangent angles, which is used to describe the change of the curvature of the path and the degree of bending of the track, and θ_j denotes the angle between the tangent at the preview point z_j and the moving direction of the driverless car. Whether the path's course varies unilaterally or swings left and right, the bending degree of the path increases. The points z_j can be chosen at equal interval numbers, which means that one preview point is chosen for every fixed number of path sequence points. The number of preview points is determined by the sparsity of the sequence points that define the goal direction; the sum of Euclidean distances between interval points can be used as a criterion for determining the interval number.

Implementation of path tracking
The driverless vehicle route tracking is done based on the preview deviation angle and the path curvature. The front wheel rotation angle and the longitudinal speed are the key control variables in the route tracking operation. The unmanned vehicle control system is a typical time-delay, nonlinear, and chaotic system, and preview control has an apparent predictive ability, which is clearly superior to the standard control algorithm based on output feedback [19]. Figure 5 depicts the proposed route tracking algorithm's structure. The preview distance and the lateral and longitudinal control quantities are calculated from the preview point information, and determining them correctly is the essential step. The following briefly describes how the preview distance and the lateral and longitudinal controls are determined. (1) Determination of the preview distance.
The preview distance has a direct impact on route tracking accuracy, so choosing the right value is crucial. A shorter preview distance lets the driverless car track paths of greater curvature more precisely, whereas a greater preview distance reduces the driverless vehicle's overshoot during tracking and improves tracking stability. The preview distance can be calculated from the driverless vehicle's longitudinal speed. Since the preview distance is normally saturated at a minimum and a maximum, the relationship between the preview distance l_d and the longitudinal speed v of the unmanned vehicle can be expressed analytically as l_d = min(max(a·v, l_min), l_max), where l_min and l_max are the minimal and maximal preview distances, respectively, and a is a constant.
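The saturated linear relationship between speed and preview distance amounts to a one-line clamp. The gain a and the bounds l_min, l_max below are illustrative values, not the paper's:

```python
def preview_distance(v, a=0.8, lmin=3.0, lmax=25.0):
    """l_d = a*v, saturated at lmin and lmax (example constants)."""
    return min(max(a * v, lmin), lmax)

# Low speed hits the lower bound, high speed the upper bound
print(preview_distance(2.0), preview_distance(10.0), preview_distance(50.0))
```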
(2) Curvature-based longitudinal control. In the local coordinate system, the tangent angle θ_j can be written as θ_j = arctan(Y_r / X_r), where (X_r, Y_r) is the coordinate of the preview point in the local coordinate system.
Considering only the effect of curvature on vehicle speed, once the bending degree C is computed from formulas (10) and (12), the larger C is, the smaller the vehicle speed v should be, and conversely. In addition, the vehicle speed must not exceed a maximum value v_max. Therefore, to ensure that the speed v decreases substantially as the bending degree C increases, the vehicle speed is calculated by formula (13), in which c_k is a constant. If the path is known, the bending degree C at each point along the path, as well as the maximum and minimum values C_max and C_min over the whole path, can be computed offline; the selection range of c_k is C_min < c_k < C_max.
(3) The Pure Pursuit algorithm is used for lateral control.
The deflection angle of the front wheel is calculated using the geometric relationship of the preview deviation angle, with the midpoint of the rear axle as the tangent point and the longitudinal axis of symmetry of the driverless vehicle as the tangent line, so that the driverless vehicle travels along the arc passing through the preview point and the preview deviation angle tends to zero [19]. Applying the sine theorem gives formula (14): d_l / sin(2θ) = R / sin(π/2 − θ), where R is the turning radius. Formula (14) can also be expressed as formula (15): κ = 2·sin θ / d_l, where d_l denotes the distance between the current location and the preview point z_1 and κ denotes the arc curvature. Using the simplified Ackermann vehicle model, the front wheel angle can be expressed as formula (16): δ = arctan(L·κ). Combining formulas (14)-(16), the control quantity of the front wheel rotation angle under the Pure Pursuit algorithm is δ = arctan(2·L·sin θ / d_l). In this implementation there is only one adjustable parameter, the preview distance l_d, which makes the algorithm simple to apply and tune.
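The Pure Pursuit control law δ = arctan(2·L·sin θ / d_l) is straightforward to implement; the values below are illustrative:

```python
import math

def pure_pursuit_delta(theta, dl, L):
    """Pure Pursuit: steer along the arc through the preview point.
    Arc curvature kappa = 2*sin(theta)/dl (sine theorem), and the simplified
    Ackermann model gives delta = arctan(L * kappa)."""
    kappa = 2.0 * math.sin(theta) / dl
    return math.atan(L * kappa)

# Preview point dead ahead -> no steering; to the left -> positive angle
print(pure_pursuit_delta(0.0, dl=5.0, L=2.5))
print(math.degrees(pure_pursuit_delta(math.radians(10), dl=5.0, L=2.5)))
```

Note how the single tuning parameter, the preview distance d_l, scales the response: a larger d_l yields gentler steering for the same deviation angle θ.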

Analysis of the effect of road information processing
The road condition information before and after image processing should be compared in order to verify the validity of the image processing technologies in this article; the comparison findings are shown in Fig. 6. As can be seen in Fig. 6, there is a lot of noise in the initial road state picture, which makes route information extraction difficult. The noise points in the image are clearly reduced by the median filter used in this method, and the image's original contour boundary is preserved. Binarization then cleanly isolates the target from the background. At the same time, the contour extraction method not only preserves the road feature information but also significantly reduces the amount of image data, demonstrating that the method presented in this paper is effective in processing video image data.

Analysis of path tracking effect
The effect of this approach on experimental driverless vehicle route tracking is evaluated using two techniques. The first is to measure the difference between the driverless vehicle's steering angle command and the actual steering angle. Figure 7 depicts the test findings. As seen in Fig. 7, the difference between the commanded angle and the actual angle is very slight throughout the trial, indicating that this method's control precision is high and that it has a positive impact on driverless vehicle route tracking.
Second, the influence of this approach on the driverless vehicle's route tracking is checked with respect to vehicle speed. To assess the path tracking effect at different speeds, the results of unmanned vehicle path tracking at a low speed of 18 km/h and a high speed of 93 km/h are compared; the results are shown in Figs. 8 and 9.

Energy consumption analysis of vehicle route tracking based on this method
To validate the low energy consumption of driverless vehicle path tracking under this method, its energy consumption is compared with that of the vehicle path tracking method based on fuzzy annealing and the method based on a neural network. Many trials are needed to enhance the precision of this experiment, and the results are shown in Table 1. According to the data in Table 1, the maximum energy consumption of the method in this paper is 368 J, while that of the fuzzy-annealing-based method is 621 J and that of the neural-network-based method is 991 J; compared with the two conventional methods, the maximum consumption of the proposed method is markedly lower. On average, this method uses 355.13 J of energy, far less than the two conventional approaches. Since the method in this paper uses the four-neighbourhood method to extract the boundary contour of the binary map and thereby obtains the requisite road condition feature information, it can be inferred from the above data analysis that the method can save energy. The preview deviation angle and path curvature are determined from the collected data, and accurate tracking control of the unmanned vehicle's path is accomplished, decreasing the risk of path deviation and lowering energy consumption.

Discussions
According to the above analysis, the benefits of the driverless vehicle route tracking system based on video image processing proposed in this article are as follows. 1) The captured road video image is filtered to reduce the effect of noise on image quality. Binarization reduces computation time and extracts road information more intuitively; a dynamic threshold is selected in the binarization phase, which can be adjusted in real time and adapts well to the driving environment of unmanned vehicles. The dynamic threshold is calculated using the Otsu method, which makes it easy to discern between the image's background and target, allowing the processed image to correctly retain road condition detail. This image processing technology can respond to a variety of challenging road environments and provides content assurance for subsequent preview point acquisition.
2) The majority of conventional route tracking techniques pay no attention to the combined lateral and longitudinal control of unmanned vehicles. The approach described in this article, on the other hand, uses the preview deviation angle and path curvature to track unmanned vehicles. The front wheel rotation angle and longitudinal speed are the two most significant control variables in the route tracking process. The lateral and longitudinal control of the driverless car is achieved by extracting several preview points on the goal path to obtain path preview information.

Conclusions
In this work an automatic route tracking system is designed for driverless vehicle applications. The Pure Pursuit matching and adaptive median filtering algorithms prove helpful for road tracking applications. This research work identifies road routes, helping to avoid accidents and enabling easy environmental driving. Whereas conventional models cannot provide an accurate driving experience, the proposed automatic path tracking continuously delivers an efficient driverless vehicle travelling experience.

Figure 2 .
Figure 2. Relative motion model of driverless vehicle and path. E, N, and O define the global coordinate system, with the E axis pointing eastward and the N axis pointing northward.

Figure 5 .
Figure 5. Path tracking algorithm. After video image analysis of the unmanned vehicle's target path image, the path information is collected from the processed image along the acquired path trajectory and translated into road point coordinates, as seen in Fig. 5. Searching for a preview point in the road point coordinates yields preview information such as the preview angle and path curvature. The preview point information determines the front wheel angle and longitudinal speed of the autonomous vehicle, and the speed is regulated accordingly.

Figure 7 .
Figure 7. Corner instruction and actual corner difference

Figure 9. Course mapping of an autonomous vehicle at two speeds.

Figure 6. Comparison results: (a) original road condition image; (b) median filter image; (c) binarized image; (d) contour extraction effect drawing.

Figure. Position and position deviation at different speeds: (a) 18 km/h lateral position deviation; (d) 94 km/h longitudinal position deviation.

Table 1. Comparison of energy consumption / J