Coral Reef Image Classification Employing Deep Features and A Novel Local Inter Cross Weber Magnitude (LICWM) Pattern

Coral reefs are essential to the marine ecosystem, as they sustain a great part of marine life. Automatic classification of corals in underwater images has become increasingly important in recent years, as it can assist marine experts in identifying endangered and susceptible coral reefs. However, classifying coral reef images is a challenging task due to their varying color, texture, shape and morphology. The main objective of this work is to propose a novel operator called the Local Inter Cross Weber Magnitude (LICWM) pattern. For classification, the VGG-16 architecture, whose many layers extract deep features effectively, is applied to coral reef images. The resulting deep features are classified with the traditional K-Nearest Neighbor (KNN) and Random Forest (RF) classifiers. The performance of the proposed method is evaluated using the F-score. Experimental results show that the proposed operator achieves better accuracy and overall performance on the EILAT and RSMAS data sets.

due to its robustness and discriminating capabilities. Agarwal et al. [5] have investigated the Weber Local Descriptor (WLD) to recognize gender, with minimum distance measurement and a neural network used for classification. Amit Satpathy et al. [6] have proposed the Discriminative Robust Local Binary Pattern (DRLBP) and Discriminative Robust Local Ternary Pattern (DRLTP) for object recognition, which improve discrimination between bright objects on dark backgrounds and vice versa.
Jinwang Feng et al. [7] have extracted image features using dominant complemented modelling of the traditional Local Binary Pattern (LBP) operator, which gives better results than other LBP variants. Guo et al. [8] have proposed the Completed LBP (CLBP) with three operators, namely CLBP_C, CLBP_S and CLBP_M, for texture classification. Chang Pei et al. [9] have implemented two descriptors, LBP and the Weber Local Binary Pattern (WLBP), to recognize gray-scale images, depth images and 2D + depth images; the processing time is improved, with 90% accuracy. Ahila et al. [10] have extracted image features using the WLD, which is the composition of differential excitation and differential orientation, with SVM-based classifiers used for classification. Fan Liu et al. [11] have presented the Weber Local Binary Pattern (WLBP), a combination of WLD and LBP; their future work is to reduce the feature vector size and improve computing efficiency. Zuodong Yang et al. [12] have proposed the Weber Local Pattern (WLP) and Weber Ternary Pattern (WTP), based on Weber's law, for evaluating local gray-scale differences, with good results. Thanh Phuong Nguyen et al. [13] have presented a texture framework called Statistical Binary Patterns (SBP), which applies rotation-invariant binary patterns to images. Ngoc-Son Vu et al. [14] have used Patterns of Orientation Difference (POD) to explore the relationship between gradient orientation and magnitude for various image structures, with whitened Principal Component Analysis (PCA) applied to the descriptors.
Mohammad Hatibaruah [19] has introduced a novel feature descriptor called the Elliptical Local Binary Co-occurrence Pattern (EBCOP), a combination of the sparse Elliptical Local Binary Pattern (ELBP) and the Grey Level Co-occurrence Matrix (GLCM); it shows significant improvement over other feature descriptors. Ani Brown Mary et al. [20] have also implemented feature descriptors and machine learning techniques to classify diseased coral reef images. Padma Priya [21] has classified coral reef images with hybrid methods combining traditional descriptors with deep learning; in that analysis, IWBC achieves better performance than the traditional LBP and LAP methods. Padma Priya [22] has also proposed the VGG16 deep layered architecture for feature extraction from coral reef images, which achieves better performance than the LBP and LAP techniques.

METHODOLOGY
The methodology implemented in this work is a novel operator, namely the Local Inter Cross Weber Magnitude Pattern, derived from the Improved Weber Local Descriptor (IWLD), which is based on Weber's law. Weber's law states that a change in a stimulus can only be perceived if the ratio of the change to the original stimulus is large enough. Accordingly, two components, differential excitation and orientation, are used to determine the texture of an image. The proposed operator is introduced to improve the discrimination between the pixels of an image; it is an efficient, robust and discriminative feature descriptor. The VGG net architecture is also integrated with the proposed operator to extract deep features of coral reef images and to separate one class from another. This hybrid architecture, depicted in Figure 1, gives better results.
Two data sets, RSMAS and EILAT, are used for the experiments. Figure 1 shows that features are extracted from the input images using the hybrid of the proposed operator and deep learning. VGG-16 is used for deep learning; it extracts features effectively when combined with the proposed operator LICWMP. KNN and Random Forest are used for classification after feature extraction.

The Weber difference along the x direction is estimated as shown in Eq. (1), and the Weber difference along the y direction as shown in Eq. (2).

WDx = (x5 − x4) / xc    (1)

WDy = (x2 − x7) / xc    (2)

Eq. (1) and Eq. (2) represent the relative differences along the x and y directions. Here, xc represents the centre pixel and xi (i = 1, …, 8) represents its neighbouring pixels in a 3 × 3 window, with x2, x4, x5 and x7 the edge neighbours and x1, x3, x6 and x8 the corner neighbours. The differential excitation here is the relative intensity difference between the current pixel and its neighbours. The Weber magnitude is represented in Eq. (3).

WM = arctan(√(WDx² + WDy²))    (3)

The magnitude values obtained from the existing IWLD feature are computed from the composition of the RGB channels. In the proposed method, the Weber magnitude is derived from each channel separately.
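As a minimal NumPy sketch of the per-channel Weber magnitude described above (assuming the 3 × 3 neighbour numbering used in this paper; the function name is hypothetical):

```python
import numpy as np

def weber_magnitude(channel):
    """Compute a Weber-style magnitude for one colour channel.

    For each interior pixel x_c, the relative differences along the
    x and y directions (Eqs. (1)-(2)) are combined into a single
    magnitude (Eq. (3)). A small epsilon avoids division by zero.
    """
    c = channel.astype(np.float64)
    eps = 1e-6
    xc = c[1:-1, 1:-1]                                # centre pixels
    dx = (c[1:-1, 2:] - c[1:-1, :-2]) / (xc + eps)    # right minus left neighbour
    dy = (c[:-2, 1:-1] - c[2:, 1:-1]) / (xc + eps)    # top minus bottom neighbour
    return np.arctan(np.sqrt(dx ** 2 + dy ** 2))

# Toy 4x4 "red channel"
red = np.array([[10, 20, 30, 40],
                [50, 60, 70, 80],
                [90, 100, 110, 120],
                [130, 140, 150, 160]], dtype=np.uint8)
mag = weber_magnitude(red)
print(mag.shape)  # one magnitude value per interior pixel
```

Since arctan is applied to a non-negative argument, the resulting magnitudes lie in [0, π/2), which keeps the operator bounded regardless of image contrast.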
The magnitude patterns of the Weber magnitude are derived for each pair of channels, namely red-green, red-blue and green-blue, as LICWMPrg, LICWMPrb and LICWMPgb. First, the magnitude pattern LICWMPrg is obtained by comparing the centre pixel value Rc of the improved Weber magnitude of the red channel with its four red-channel edge neighbours, namely R2, R4, R5 and R7, generating a four-bit code. If the neighbouring pixel value is greater than the centre pixel value, the corresponding bit is 1; otherwise it is 0. The binary code values are thus estimated using Eq. (4).

s(Ri, Rc) = 1 if Ri > Rc, 0 otherwise    (4)
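The thresholding used to build the binary code can be sketched as follows (hypothetical helper name; the comparison follows the convention of the worked example in Step 1 below, where a neighbour greater than the centre yields bit 1; neighbour values other than R2 = 64 are toy numbers):

```python
def bit(neighbour, centre):
    """Eq. (4): 1 if the neighbouring value exceeds the centre pixel, else 0."""
    return 1 if neighbour > centre else 0

# Centre pixel 45 against the four red edge neighbours R2, R4, R5, R7
code = [bit(n, 45) for n in (64, 40, 50, 30)]
print(code)  # [1, 0, 1, 0]
```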

Figure 4 Illustration of Binary code for IWMr
Next, in this improved Weber magnitude red pattern, the centre pixel Rc is compared with the corner positions of the green magnitude, namely G1, G3, G6 and G8, to obtain a second binary code. By concatenating these binary codes in the anticlockwise direction, a decimal value is obtained from the red and green magnitudes, and a histogram is generated from these values. Similarly, every red-magnitude centre pixel is compared with its local red-magnitude neighbours and the inter-cross green-magnitude corners to obtain the features of the coral reef image, and a histogram is generated from the resulting intensity values. The corresponding process is illustrated by the following steps.

Figure 5 Binary code values generation
Step 1: Binary code generation using the improved Weber magnitude red (IWMr)

As shown in Figure 4, the centre pixel value Rc is taken as 45. In IWMr, Rc is first compared with its neighbour values. For example, the pixel value of R2 is 64, and 64 > 45, so the binary value for R2 is 1, because R2 is greater than the centre pixel value. The same step is followed for all neighbouring pixels. Finally, the binary code for the local red magnitudes is generated as 1100, as shown in Figure 5.

Step 2: Binary code generation from IWMr and IWMg

In Step 2, the red-magnitude centre pixel Rc is compared with the green-magnitude corners, namely G1, G3, G6 and G8, to obtain the second binary code, as shown in Figure 6. For example, the centre pixel value of the red magnitude is 45, which is compared with G1 of the green magnitude, whose value is 32. Since 32 < 45, the binary code value is 0, as shown in Figure 7. The same step is followed for the other neighbouring pixels, namely G3, G6 and G8.
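Steps 1 and 2 can be sketched end-to-end as follows. Only Rc = 45, R2 = 64 and G1 = 32 come from the worked example; the remaining neighbour values are toy numbers, and the fixed bit order used here (R2, R4, R5, R7, then G1, G3, G6, G8) is an illustrative choice that may differ from the paper's anticlockwise concatenation:

```python
def licwmp_code(rc, red_edges, green_corners):
    """Concatenate the two 4-bit codes: local red edge neighbours
    followed by inter-cross green corner neighbours (Eq. (4) per bit)."""
    bits = [1 if v > rc else 0 for v in red_edges + green_corners]
    # Interpret the 8 concatenated bits as one decimal code value
    return int("".join(str(b) for b in bits), 2)

rc = 45
red_edges = (64, 70, 40, 30)      # R2, R4, R5, R7 (toy values)
green_corners = (32, 50, 20, 60)  # G1, G3, G6, G8 (toy values)
code = licwmp_code(rc, red_edges, green_corners)
print(code)  # bits 1,1,0,0,0,1,0,1 -> 197
```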

Figure 8 Binary code obtained
This process is repeated for every pixel position, and the histogram is then determined for LICWMPrg. Similarly, LICWMPrb is obtained by comparing the red and blue magnitudes, and LICWMPgb by comparing the green and blue magnitudes. The histogram derived from this Local Inter Cross Weber Magnitude Pattern is then used to recognise the features of coral reef images.
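The final descriptor can then be sketched as a concatenation of the three code histograms (a minimal NumPy sketch; `codes_rg`, `codes_rb` and `codes_gb` stand for the per-pixel decimal code maps, and all names and toy values are hypothetical):

```python
import numpy as np

def licwmp_histogram(codes, n_bins=256):
    """Histogram of 8-bit LICWMP code values over the whole image."""
    return np.bincount(codes.ravel(), minlength=n_bins)

# Toy code maps for the red-green, red-blue and green-blue pairs
rng = np.random.default_rng(0)
codes_rg = rng.integers(0, 256, size=(62, 62))
codes_rb = rng.integers(0, 256, size=(62, 62))
codes_gb = rng.integers(0, 256, size=(62, 62))

# Concatenating the three histograms yields one feature vector per image
feature = np.concatenate([licwmp_histogram(c)
                          for c in (codes_rg, codes_rb, codes_gb)])
print(feature.shape)  # 3 x 256 histogram bins
```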

Figure 9 Encoding red and green magnitudes

3.3 VGG16 Architecture
VGG networks are deep learning networks with a deep layered architecture, developed by the Visual Geometry Group at Oxford University. Deep features are extracted from raw input images by passing them through several hidden layers. VGG16 is a convolutional neural network architecture with 16 weight layers. The layers present in VGG16 are convolutional layers, max pooling layers, activation layers and fully connected layers. The architecture is composed of 13 convolutional layers, 5 max pooling layers and 3 dense layers, which sum to 21 layers in total, of which only 16 carry weights.
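The layer accounting above can be checked with a plain-Python tally (the block configuration follows the standard VGG16 design; this is bookkeeping, not framework code):

```python
# Standard VGG16 block layout: (number of conv layers, filters per conv)
conv_blocks = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]
dense_layers = 3                 # two 4096-unit layers plus the classifier
pool_layers = len(conv_blocks)   # one max-pool closes each block

conv_layers = sum(n for n, _ in conv_blocks)
weight_layers = conv_layers + dense_layers   # only conv and dense carry weights
total_layers = conv_layers + pool_layers + dense_layers

print(conv_layers, weight_layers, total_layers)  # 13 16 21
```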
In Conv1, 64 filters are used; Conv2 has 128 filters, Conv3 has 256 filters, and Conv4 and Conv5 each use 512 filters. The convolution layer is the first layer: it takes the input image and applies convolution filters to produce an output that is fed to the successive layers. Multiple convolution layers, interleaved with sub-sampling, are used to provide a progressively better overview of the image.
In the proposed work, VGG implements deeper networks, which extract better features from coral reef images and help distinguish one class from another, showing efficient performance. When the fully connected layers are detached from the architecture, it can be used purely to extract features of coral reef images. These deep features are then classified using the KNN and Random Forest classifiers. The overview of the VGG16 architecture is shown in the Figure.

3.4 Classifiers
The process of grouping tested image samples into their particular classes is often termed texture classification. Each class is formed according to similarity of features, and the main aim of the classification algorithm is to assign each image to its appropriate category. Image classification can be carried out by several classifiers; those used in the proposed system are K-Nearest Neighbors (KNN) and Random Forest. KNN assigns a sample to the majority class among its k nearest training samples, typically using the Euclidean distance d(x, u) = √(Σi (xi − ui)²), where x and u represent the feature vectors. Random Forest is a supervised learning algorithm built from a collection of decision trees; it is a popular model with good results, which are obtained by generating random, uncorrelated decision trees and aggregating their votes.
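A minimal sketch of the KNN decision on feature vectors (pure NumPy, toy data; in the real system the inputs are the deep LICWMP features, and the class labels here are illustrative):

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Assign x the majority label among its k nearest training vectors,
    using Euclidean distance between feature vectors."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Toy 2-D features for two coral classes
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
train_y = ["sand", "sand", "brain coral", "brain coral"]
print(knn_predict(train_X, train_y, np.array([0.95, 1.0]), k=3))  # brain coral
```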

Figure 12 RSMAS data set with coral names and sample images

4 Experimental Results and Analysis

4.1 Data set Description
The EILAT data set used in the proposed system contains 1123 image samples, in which particular coral species are identified by labels. Eight classes are found in the EILAT data set, namely branches type I, branches type II, branches type III, brain coral, urchin, favid coral, dead coral and sand. The resolution of the images in this data set is 64 × 64. Sample images are shown in Figure 11.
The image patches in the RSMAS data set were collected by the Rosenstiel School of Marine and Atmospheric Science of the University of Miami. The size of the image patches is 256 × 256, and the images were collected from different places using different cameras. These image patches are divided into 14 classes. The coral class names of the RSMAS data set are Tunicates, Acropora palmata, Sponge fungus, Colpophyllia natans, Diploria strigosa, Siderastrea siderea, Gorgonians, Palythoas palythoa, Millepora alcicornis, Montipora spp., Montastraea cavernosa, Diadema antillarum, Meandrina meandrites and Acropora cervicornis. Sample images are shown in Figure 12.

Accuracy [24] measures how close the predicted value is to the actual (true) value. It is the ratio of the number of correct predictions to the total number of input samples, as shown in Eq. (6).

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (6)

Sensitivity [24], the True Positive Rate (TPR), is the fraction of positive data points that are correctly classified as positive, with respect to all positive data points, as shown in Eq. (7).

Sensitivity = TP / (TP + FN)    (7)

Specificity [24], the True Negative Rate (TNR), is the fraction of negative data points that are correctly classified as negative, with respect to all negative data points, as shown in Eq. (8).

Specificity = TN / (TN + FP)    (8)

The F1 score is used to measure the test precision. It is the harmonic mean of precision and recall, as shown in Eq. (9), where precision is the number of correct positive results divided by the number of positive results predicted by the classifier.

F1 Score = 2 × (Precision × Recall) / (Precision + Recall)    (9)
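These metrics can be computed from confusion-matrix counts as follows (the counts are hypothetical toy numbers, not results from the paper):

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity and F1 score (Eqs. (6)-(9))."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)     # true positive rate (recall)
    specificity = tn / (tn + fp)     # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

acc, sens, spec, f1 = metrics(tp=90, tn=80, fp=10, fn=20)
print(round(acc, 3), round(sens, 3), round(spec, 3), round(f1, 3))
```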

4.2 Classification Results
The experimental results are summarised in Table 1, which shows how effectively the coral images are classified with local descriptors and deep learning methods. The first three rows display the results obtained with LAP, LBP and IWBC. Table 1 confirms that IWBC was the best local descriptor for feature extraction with the KNN classifier in previous work [21]. In the proposed work, the novel operator LICWMP achieves better results than the existing IWBC.
First, the experiments were conducted with the handcrafted methods, and the results were analysed; Table 1 shows the results obtained on the EILAT data set. The next experiments were conducted with the VGG-16 deep network architecture, which gives the best classification results, as shown in Table 3. Among all the existing works listed, the proposed operator with VGG-16 achieves better performance than the other descriptors. The proposed method is used for extracting the features of coral reef images, with KNN and Random Forest used as classifiers, and achieves higher accuracy than the existing operators because the inter-cross scheme used for feature extraction covers all directions of pixels around the centre pixel.

Table 3 The overall accuracy of the proposed operator with various descriptors

CONCLUSION

A novel operator called the Local Inter Cross Weber Magnitude pattern has been proposed for feature extraction and classification of coral reef images. Experiments have been conducted in which the proposed operator is integrated with deep features to improve performance, and the KNN and Random Forest classifiers are applied for classification. The proposed operator LICWMP provides better results than the other traditional local descriptors, namely LAP, LBP, IWBC, CLBP and ILDP. When the proposed method is integrated with VGG-16, its performance improves to 98.8%. These experiments were conducted on the EILAT and RSMAS data sets. The proposed operator LICWMP can also be integrated with various machine learning algorithms in further research.