Detection of Moving Vehicles on Highway using Fuzzy Logic for Smart Surveillance System

Abstract: Computer vision plays a prominent role in developing models for medical, security, and many other applications, and a central task is detecting and tracking a moving object such as a vehicle or a person. Several challenges arise from environmental conditions, illumination variation, and fast motion. This work develops a fuzzy-logic-based method for identifying moving vehicles through bounding boxes, depicted in green. The paper presents a fuzzy-based method to detect and track objects around a location within the secured dimensions visible to the device. The proposed method, Extended Image Recognition (EIR), works on automatic fuzzy-set creation: EIR takes the pixels of the surroundings and recognizes them against the predefined inputs held in the algorithm's repository. The methodology successfully identifies and detects vehicles and can track instances within the visible region, working much like the human eye.


Introduction
Computer vision is a main focus of artificial intelligence nowadays because of the rapid growth of medical imaging and of security implementations deployed in different regions for different purposes. Classifying the images at a location is a task that simple mechanisms can handle; the harder task lies in identifying and tracking an image or object that is in motion. A moving object changes its coordinates. Those coordinates can be tracked with computer vision, but identifying the key points of the object remains a problem, so an improved mechanism is needed that helps researchers identify moving objects with better accuracy and distinguish objects in motion from those at rest. Artificial intelligence works on the mathematical representation of the landmarks of the objects present at the location. Objects need to be identified both in their standard (static) position and in their kinetic position. The static position can be considered the state S, and the moving positions are updated to the state Si. Using the map of the location, we need to track the S and Si values.
Consider the image instance in Figure 1 below: patterns in the image can be identified by converting any colour image to greyscale, and every colour image can be used to identify each instance of an object changing position. Two scenarios need to be considered: objects that do not move within a time bound, and objects that do move within it. A classification mechanism can be used to identify the objects in the region and to separate those that cannot move from those that can.
For instance, consider Figure 1 below, which shows object identification based on the input feed the machine received from the developer.
Figure 1. Image identification based on the input search
Based on the input search, the image was identified in the specific frame, and we need to consider the process of separating features based on colours [1].

Research Article
For instance, consider the following image (Figure 1), which contains two objects and one background.

S -> total environment
S(1, 2, 3, ..., n) -> objects in the environment
If S(i)[0] -> S(i)[1], then the focused object is movable.
If S(i)[0] = S(i)[1], then the object is not movable.

Before attempting image classification, we need to identify the list of objects in the frame and determine which can move and which cannot. In the current generation of technology development, approaches to identifying moving objects fall into two major parts: i) automated and ii) partially automated; the automated approaches are further classified as fully automated or not. In recent times these methods have become important in traffic signals, medical applications, road safety, and so on. The implementations focus on traffic safety, for example methodologies using fuzzy logic, a mathematical framework in which we can identify a location based on the pixel values of an object. Such implementations are widely used and helpful for identifying vehicles that violate traffic rules at signals. There are two further mechanisms to identify and classify an object: image processing, carried out with digital image processing tools such as MATLAB and OCTAVE, and radar technology, which can track the sonic waves of a vehicle or object. Image classification is the major part of image processing; it divides the environment into 24 different frames that can be used for further classification [2]. Radar operations are quite complex because they do not stop until the user stops them: if a driver jumps a signal, the radar tracks the operation and keeps tracking the object or vehicle until the driver stops it.
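The movable/not-movable rule above can be sketched in Python. This is a minimal illustration under assumed data: object states are given as pairs of pixel coordinates at two time steps, which is not the paper's implementation.

```python
# Classify objects as movable or static by comparing their state at two
# time steps: if S(i)[0] differs from S(i)[1], the object moved.

def classify_objects(states):
    """states: dict mapping object id -> (initial_position, later_position),
    where each position is an (x, y) pixel coordinate."""
    movable, static = [], []
    for obj_id, (s0, s1) in states.items():
        if s0 != s1:          # coordinates changed between frames
            movable.append(obj_id)
        else:                 # no displacement observed
            static.append(obj_id)
    return movable, static

# Hypothetical example: two objects and one background region.
states = {
    "car":        ((10, 20), (14, 20)),   # moved 4 pixels along x
    "tree":       ((50, 60), (50, 60)),   # unchanged
    "background": ((0, 0),   (0, 0)),     # unchanged
}
movable, static = classify_objects(states)
print(movable)  # ['car']
print(static)   # ['tree', 'background']
```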
This is a quite complex task. So far we have services to store video that was streamed or recorded using external devices, but there is no proper approach to analyse the density of the video or the object and identify the moving things. The proposed methodology can work on tasks such as density identification (upper, middle, lower). This is a relative density that works on identifying the aspect ratio of the object. We implement several stages of classification methodologies that help establish the accuracy of the algorithm, combining density with EIR (Extended Image Recognition); both work on the accuracy of aspect-ratio identification. The next section explains the different approaches, the following section the different levels of processing, and the section after that reviews the previous work. Finally, the results are explained with sample output of the proposed system [3].

Different Approaches
There are four kinds of approaches, and for each approach we maintain a different scenario to map the results. The first approach concerns the dimension of the feed; the second considers the number of cameras processing at a time; the third considers the motion of the camera; and the fourth considers whether the implementation runs in offline or online mode. Each of these approaches has its own importance in building the computer vision algorithm and the model for the task under consideration.

Dimensionality
Dimensionality concerns whether a two-dimensional or three-dimensional image is taken as the feed for the algorithm. In this scenario, we need an algorithm that can be expressed with the following pseudo code.
for each frame in the feed
    for each object in the frame
        if the object count increased or a new object is detected in the frame
            # mark the live measurements of the object in motion
            count = count + 1
        end
    end
end

When the count variable has been gathered from the feed, we can proceed with the classification procedure using machine learning.
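A runnable sketch of this counting step, under the assumption that the upstream detection step delivers each frame as a set of object identifiers (the detection itself is not shown):

```python
# Count newly detected objects across consecutive frames, mirroring the
# counter loop above: the count increases whenever a frame contains an
# object that has not been seen before.

def count_new_detections(frames):
    count = 0
    seen = set()
    for frame in frames:
        for obj in frame:
            if obj not in seen:   # object count increased in this frame
                seen.add(obj)
                count += 1
    return count

# Hypothetical feed: a car appears in frame 1, a person in frame 3.
frames = [{"car"}, {"car"}, {"car", "person"}]
print(count_new_detections(frames))  # 2
```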
In the same way, the third dimension can be taken as the input; the implementation then also falls under the second category, namely the count of the cameras.

Count of the camera
We need to decide from how many cameras to take the feed. We take the feed from a single camera if the chance of risk is low, and from multiple cameras if the chance of risk is high. Let R[i] be the risk factor for i = 0, 1, 2, 3, 4. The risk value i is defined with five levels, and based on the risk factor we choose feeds from the corresponding number of cameras.
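One way to realize this is a mapping from the five risk levels to a camera count. The specific mapping below (one camera at the lowest risk, one more per level) is an assumed illustration, since the paper does not fix the numbers:

```python
# Choose how many camera feeds to use from the risk factor R[i],
# i = 0..4: low risk uses a single camera, higher risk uses more.

def cameras_for_risk(risk_level, max_cameras=5):
    if not 0 <= risk_level <= 4:
        raise ValueError("risk level must be in 0..4")
    # Assumed mapping: one camera per risk level, capped at max_cameras.
    return min(risk_level + 1, max_cameras)

print(cameras_for_risk(0))  # 1 camera for the lowest risk
print(cameras_for_risk(4))  # 5 cameras for the highest risk
```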

Camera type
We need to identify which camera type we are using, static or dynamic. A static camera is fixed, like CCTV, whereas a dynamic camera moves, like a drone-mounted one.

Feed type
We need to identify whether the feed is online or offline. We must consider the network capacity to store the captured data before submitting the feed to the model. If the feed is online, the computer vision algorithm works continuously; if the feed is offline, the feed is submitted as model input after some stipulated time bound.

LEVEL OF PROCESSING
There are different levels of processing for the models, described below with pseudo code.

Low Level
In low-level processing, we consider the RGB colour pixel values and prepare the image for further evaluation. Two tasks are performed at this level:
• reducing the computational load of the colour image
• decreasing the noise in the data
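Both low-level tasks can be sketched in pure Python: a greyscale conversion (one value per pixel instead of three, reducing computation) and a simple mean filter (reducing noise). This is a minimal stand-in for the MATLAB steps, not the paper's code:

```python
# Low-level processing sketch: RGB -> grey conversion plus a 3-tap
# moving-average filter applied along a row of grey values.

def to_grey(rgb_row):
    # Standard luminance weights for RGB -> grey conversion.
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_row]

def smooth(row):
    # 3-tap moving average; edge pixels are left unchanged.
    out = list(row)
    for i in range(1, len(row) - 1):
        out[i] = (row[i - 1] + row[i] + row[i + 1]) / 3
    return out

row = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]   # red, green, blue pixels
grey = to_grey(row)          # one value per pixel instead of three
print([round(v, 2) for v in grey])
print([round(v, 2) for v in smooth(grey)])
```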

High Level
This is the level where machine learning methodologies are implemented: decision trees, random forests, neural networks, frequency-domain calculation, and a control-chart approach.

Decision Trees
Decision trees are one of the finest choices for implementing object detection with computer vision, because the rules composed while solving a problem can be reused directly. The composed rules improve model accuracy, and the procedure depends on which processing level is activated, high level or low level.
If decision trees are implemented with low-level processing, we consider the moves the person or object has made since the feed recording started; one or two dimensions then suffice and a third dimension is not needed. If the feed comes from high-level processing, there is a point in considering the third dimension.

Neural Networks
Neural networks work well in this field, and there is no need for basic models when a neural network performs well in the prediction scenario. The perceptron generation and the implementation of the hidden layers lay out the rules correctly, and the result of each layer becomes the input of the next layer, as in recurrent neural networks (RNNs). Here we consider the features through backpropagation, and the implementation gives more accurate results when a large number of hidden layers is used instead of a few. If there are 10 features, the hidden-layer count should not be small; to get accurate results from the neural network, we consider a large number of hidden layers.
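The layer-feeds-layer computation described above can be illustrated with a tiny forward pass. The weights below are arbitrary illustrative values, not trained parameters, and the network is a minimal sketch rather than the paper's model:

```python
# Minimal forward pass through stacked layers: each layer's output
# becomes the next layer's input.

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, bias):
    # One fully connected layer: weighted sum per neuron, then ReLU.
    return [relu(sum(w * x for w, x in zip(row, inputs)) + bias)
            for row in weights]

features = [1.0, 2.0]                       # two input features
h1 = layer(features, [[0.5, -0.25], [1.0, 1.0]], bias=0.1)
h2 = layer(h1, [[1.0, -1.0]], bias=0.0)     # second layer consumes h1
print(h1)
print(h2)
```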
The upper and lower bounds are combinations of the mean and standard deviation of the variables considered. If the variables are in 2D, then i and j are the input variables; if the camera count changes, we consider the 3D values of the fuzzy set. The aspect ratio is determined using the following image, and the methodology results are reported with respect to this aspect-ratio analysis. Many researchers are working on the same concept of identifying objects in motion, so we consider one sample example before going deeper into the literature review and the proposed algorithm.
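The mean/standard-deviation bounds can be written out directly. The aspect-ratio samples below are hypothetical values for illustration:

```python
# Upper and lower fuzzy-set bounds as combinations of the mean and
# standard deviation of observed aspect ratios, as described above.
import statistics

def fuzzy_bounds(values, k=1.0):
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)      # population standard deviation
    return mu - k * sigma, mu + k * sigma  # (lower bound, upper bound)

# Hypothetical aspect-ratio samples for one object category.
ratios = [1.2, 1.4, 1.3, 1.5, 1.1]
low, high = fuzzy_bounds(ratios)
print(round(low, 3), round(high, 3))
```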

The army is the backbone of the country, and we need to protect soldiers from enemies by identifying objects moving in their path so that we can warn them in advance. There is a concept called the airborne network, in which a radar is attached to the planes and to the soldiers' kit, so that if any object moves against them in their path, we can track it and alert them as a security measure. But there is a problem: we cannot track people or objects under the colour green, because green contrasts strongly with human skin colour and does not blend with it; that is the reason green screens are used in VFX operations.
[4] presents an algorithm for moving-object identification in which the authors describe three concepts: frame differencing, background subtraction from the foreground, and optical flow. These three operations help identify an object through background/foreground differentiation. The other work in that article identifies objects relative to other objects, i.e. differentiates objects from one another.
Figure 4. The architecture currently used to understand object motion
[5] identifies pixels that can classify objects based on the impact of the pixels in the frame; the authors focus on differentiating objects by their pixel values, since every image in the frame has a unique pixel value. There is a problem with this concept, which is explained in the proposed-algorithm section.
[6] shows that comparing video footage can help identify moving objects in the frame. This uses PCA, an unsupervised mechanism with which we can identify the matrix values, that is, whether the input should be considered in 2D or 3D format, and based on that input decide the hidden layers of the neural network. [7] performs distance-based classification between moving objects: the researchers implement the K-Means algorithm, which tracks the Euclidean distance between the objects in motion.
[8] implements a CMOS algorithm for identifying moving objects in the frame; the frame consists of edge values that can track the pixel values and the distance parameters of the objects. As in [7], edge detection is applied to the images and videos so that the images or objects in motion can be tracked.

Disadvantages of the existing methodologies
The main disadvantage of the existing approaches is that they consider the distance and pixel parameters of the objects in the frame but do not focus on the dynamic movements of the pixels from time to time. The time stamp of each movement must be gathered, and this is missing from the existing approaches [9].

Proposed Algorithm
The proposed algorithm consists of three different scenarios, which are combined at the end of the implementation [10]. The proposed methods of the current paper are:
I. Identifying the objects in the frame
II. Identifying the type of objects in the frame
III. Implementing the EIR method on the frame and objects with machine learning and fuzzy models
In the first method we consider the frame completely, as mentioned below.

Figure 5. Consider the complete frame which comes into vision
The information gathered from the frame is used to analyse which objects are in motion and which are not. Objects in motion are tagged with a variable to track them; objects that are not moving are not considered in the current review [11]. In the second stage we identify all the objects in the frame, but only the objects in motion are represented in the output. The reason for considering all objects in the frame without displaying them is that the objective of this research is to track the objects in motion, which were already recognized in step 1. In the third stage we identify the purpose of the moving objects and classify what kind of objects they are, whether human, animal, and so on. Meanwhile, in the second stage the secondary inputs are provided by pre-classified results [12]: for example, if an object is not moving, we still need to identify which object it is, by analysing its type with a predefined computer vision methodology. In the third stage we differentiate the methodology [13][14].
The pseudo code below explains each step. The output of each step is processed as the input of the next stage.

Pseudo Code 1 for Identifying objects in entire frame:
Step 1: Consider the image from the camera
Step 2: Identify how many cameras are available to give the feed related to a frame
Step 3: Identify which feed is clear, with a good pixel value, and store the information
Step 4: Update the feed every second
Step 5: Repeat steps 2 to 4
Step 6: Form the dataset or repository with the relevant information
Step 7: Pass the dataset to Procedure 2

Pseudo Code 2 for identifying the type of objects in the frame:
Step 1: Take the input from Procedure 1

Step 2: Consider the secondary feed of the modelling (classification)
Step 3: Identify the objects which are movable
Step 4: Make a separate list of the pixel positions of movable objects
Step 5: Identify the objects which are static in nature
Step 6: Make a separate list of those pixel positions and make it non-modifiable
Step 7: Continue the same procedure from Step 1 to Step 6

Pseudo Code 3 for the EIR implementation with Computer Vision
Step 1: Consider the feed from Procedure 2
Step 2: Implement EIR on the data
Step 3: Process the computer vision mechanisms which can track each moving object in the extended frame
Step 4: Make the decision on the feed
Step 5: Repeat the procedure

The above are the pseudo codes for the three procedures, which help identify the purpose of the movement and its result against the threshold value.
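The three procedures can be chained in a minimal end-to-end sketch. All function names, fields, and values below are illustrative placeholders under assumed data, not the paper's implementation:

```python
# Sketch of the three procedures: pick the clearest feed, split objects
# into movable/static lists, then make an EIR-style decision against a
# per-category threshold.

def procedure1(feeds):
    # Procedure 1: choose the feed with the best pixel quality score.
    return max(feeds, key=lambda f: f["quality"])

def procedure2(frame_objects):
    # Procedure 2: separate movable objects from static ones.
    movable = [o for o in frame_objects if o["moves"] > 0]
    static = [o for o in frame_objects if o["moves"] == 0]
    return movable, static

def procedure3(movable, thresholds):
    # Procedure 3: flag objects whose movement count exceeds the
    # threshold for their category.
    return [o["name"] for o in movable
            if o["moves"] > thresholds.get(o["category"], 0)]

feeds = [{"camera": 1, "quality": 0.6, "objects": []},
         {"camera": 2, "quality": 0.9, "objects": [
             {"name": "car", "category": "vehicle", "moves": 12},
             {"name": "sign", "category": "fixture", "moves": 0}]}]
best = procedure1(feeds)
movable, static = procedure2(best["objects"])
print(procedure3(movable, {"vehicle": 5}))  # ['car']
```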

Threshold Value
The threshold value is the base value allotted to every kind of object using a fuzzy set. For example, if F(X) is the fuzzy set with universal variable X, then we allot an individual threshold value to each object based on its type [15]. The threshold is based on the category of the object; the category is predefined, and the threshold depends on that category of the variables [16][17][18].

Based on the threshold value we identify the purpose of the movement; the threshold states how many times the object moved within a stipulated time. It is based on the average value a human can register while moving [19].
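A sketch of the per-category threshold check: movement counts are mapped into a fuzzy membership in [0, 1], and an object is flagged once its membership crosses its category's cut-off. The breakpoints and cut-off values are assumed for illustration, since the paper does not state them:

```python
# Per-category threshold via a fuzzy set F(X): membership grows with
# how often the object moved in the stipulated time window.

def membership(moves, low, high):
    # Piecewise-linear membership: 0 below `low`, 1 above `high`.
    if moves <= low:
        return 0.0
    if moves >= high:
        return 1.0
    return (moves - low) / (high - low)

THRESHOLD = {"vehicle": 0.5, "person": 0.8}  # assumed per-category cut-offs

def flag(category, moves, low=5, high=25):
    return membership(moves, low, high) >= THRESHOLD[category]

print(flag("vehicle", 20))  # membership 0.75 >= 0.5 -> flagged
print(flag("person", 20))   # membership 0.75 < 0.8 -> not flagged
```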

Results
We performed several implementations using different machine learning methodologies and the computer vision algorithm, and some of the samples collected while performing the task in MATLAB are described now. We plot the results of the different machine learning models: decision trees achieved the highest accuracy, with random forests close behind. KNN may reach higher accuracy if the features are more accurate, and that is our future scope of implementation. Figure 8 explains Procedure 1, and Figures 9 and 10 show the output acquired by implementing the EIR modelling with computer vision in MATLAB. The final result is to identify the threshold of the object and the purpose of its movement. If the object is identified as suspicious, higher officials can be informed for security purposes. This EIR method with computer vision helps identify the objects in motion in a certain frame, which can be helpful for security reasons.

The research previously done on this concept does not address issues such as tracking live images or objects in motion within the frame, nor the possibility of identifying non-moving objects within the frame [20], even though that data can be tracked and used for further improvement. For example, in the existing formats we cannot track objects in a non-authorized location. That means, if the project involves tracking a vehicle in an unauthorized location, the proposed system offers a better chance to detect and track that vehicle, while a similar capability is missing in the existing approaches.

Conclusion
The computer vision algorithm has been applied across different domains, showing how video frames can be understood and what insights the problem holds. The real problem recognized before this experiment was to recognize an object using fuzzy logic, making it possible to track the object whenever it appears in the traffic. The major approach of the EIR model is to recognize the entire frame of the recording, where this work can distinguish the images that are in motion from those that are not. The EIR model works on the three categories mentioned in the proposed system; the implementation in the current article covers two of the approaches, since we focused on objects in motion. This work recognizes the motion in specific frames, and EIR performs the task by recognizing the objects in motion in that frame with specific dimensions.