Performance Analysis of Weld Image Classification Using a Modified ResNet CNN Architecture

Abstract: The detection and classification of weld images is important for improving the quality of joined materials during production. To automate weld image classification in industry, this paper proposes an effective automatic method for detecting and classifying weld images into four different cases using deep learning. A Convolutional Neural Network (CNN) is adopted for weld image classification by modifying the internal architecture of a ResNet CNN. The proposed architecture consists of three convolutional layers, two pooling layers with an activation layer, and two Fully Connected Neural Networks (FCNNs). Each FCNN in the proposed architecture has 15 hidden layers with 20 neurons each, which yields high classification efficiency. Morphological activity functional methods are then applied to the classified weld images to detect crack regions.


Introduction
Welding is an important process in manufacturing different kinds of materials, and the quality of a material depends on the type of welding [3][4][5]. Although numerous types of welding exist, four weld cases are frequently seen in practice: Good weld, Excess weld, No weld and Undercut weld. In a Good weld, the weld regions on the joined materials are well formed and the weld lasts for several years; this case is therefore regarded as the ideal weld for all kinds of materials. In an Excess weld, the weld material overflows onto both joined parts; this weld is imperfect and unsuitable for non-robust materials. In a No-weld case, the weld is absent from the joined materials; this is the worst case and should be detected and eliminated during the manufacturing process. In an Undercut weld, the weld on the joined materials is imperfect. Fig. 1 shows these four types of weld images encountered in industrial processes.

Figure 1 (a) Excess weld (b) Good weld (c) No weld (d) Undercut weld
Hence, the detection and classification of weld images is important for improving the quality of joined materials during production. To automate weld image classification in industry, this paper proposes an effective automatic method for detecting and classifying weld images into four different cases using deep learning, adopting a CNN whose internal architecture is modified for the task. The remainder of this paper is organized as follows: Section 2 reviews conventional methods for weld image classification in industry, Section 3 describes the proposed CNN architecture, Section 4 presents the simulation results, and Section 5 concludes the article.
Yang et al. (2020) computed multi-level features from source weld images and classified the resulting feature set using unified deep learning neural networks. The authors used geometric and intensity-based feature extraction to train the deep learning architectures, and their method achieved 91.36% classification accuracy on a weld image dataset. Chiraz Ajmi et al. (2020) used a deep learning method to detect cracked images in weld materials, employing an AlexNet architecture to identify weld image types during manufacturing; a fine-tuning technique was applied to the AlexNet CNN to improve the classification accuracy of the detected weld images. Wu et al. (2019) used an electromagnetic sensing diffusion method to find cracks in welded images, analyzing the properties of weld portions using various electrical index parameters.
The methods proposed in that article were applied to a large number of weld images from an open-access dataset and were also validated manually. The error rate of the method was as low as 1.56%, and hence many industries have used it for weld property analysis.

Proposed methodology
In this article, weld images are classified into different classes using a deep learning architecture. The conventional CNN deep learning architecture is modified, and weld images from the four cases are used to train this modified CNN. The same modified CNN is then used to classify a source weld image (to be tested) into one of four classes: Excess weld, Good weld, No weld and Undercut weld. The flow of the proposed weld image classification architecture in the training and testing stages is depicted in Fig. 2(a) and Fig. 2(b), respectively.
A data augmentation process is used to increase the number of source weld images and thereby improve classification performance. This article applies a shifting function, covering both left and right shifts, to each input weld image to obtain a large number of images for each weld case. The existing ResNet CNN architecture, which classifies a weld image into four classes, is first applied for weld image classification. This existing architecture consists of three convolutional layers, two pooling layers with an activation layer, and a single FCNN [10][11]. Convolutional layer 1 has 32 filters with 7*7 kernels, convolutional layer 2 has 64 filters with 7*7 kernels, and convolutional layer 3 has 128 filters with 7*7 kernels. Each convolutional layer convolves the source weld image with its internal kernels, producing a feature matrix whose combined size exceeds that of the source image; this matrix is therefore passed through a pooling layer to reduce its size. The pooling response may contain negative values, so it is passed through an activation layer to eliminate them. The activation response is then passed through the FCNN, which produces the final classification response (four classes). The FCNN in the existing architecture has 3 hidden layers with 15 neurons each.
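The left/right shift augmentation described above can be sketched as follows; the zero-fill choice and the shift range are illustrative assumptions, not values stated in the paper.

```python
import numpy as np

def shift_image(img, offset):
    """Shift a 2-D image horizontally by `offset` pixels.

    Positive offset shifts right, negative shifts left; vacated
    columns are zero-filled (an assumed, common augmentation choice).
    """
    shifted = np.zeros_like(img)
    if offset > 0:
        shifted[:, offset:] = img[:, :-offset]
    elif offset < 0:
        shifted[:, :offset] = img[:, -offset:]
    else:
        shifted[:] = img
    return shifted

def augment(img, max_shift=3):
    """Return the original image plus left- and right-shifted copies."""
    out = [img]
    for k in range(1, max_shift + 1):
        out.append(shift_image(img, k))    # right shift by k pixels
        out.append(shift_image(img, -k))   # left shift by k pixels
    return out
```

With `max_shift=3`, each source weld image yields seven training images (the original plus three left- and three right-shifted copies), multiplying the effective dataset size per weld case.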
The proposed ResNet CNN architecture consists of three convolutional layers, two pooling layers with an activation layer, and two FCNNs. Convolutional layer 1 has 128 filters with 5*5 kernels, convolutional layer 2 has 256 filters with 5*5 kernels, and convolutional layer 3 has 512 filters with 5*5 kernels. Each convolutional layer convolves the source weld image with its internal kernels, producing a feature matrix whose combined size exceeds that of the source image; this matrix is therefore passed through a maximum pooling layer to reduce its size, replacing the pooling function used in the existing CNN architecture. The pooling response may contain negative values, so it is passed through an activation layer to eliminate them. The activation response is then passed through the two FCNNs, which produce the final classification response (four classes). Each FCNN in the proposed architecture has 15 hidden layers with 20 neurons each, which yields high classification efficiency. Morphological activity functional methods (Sheela et al. 2020) are applied to the classified weld images to detect crack regions. Fig. 3(a) shows the conventional ResNet CNN architecture and Fig. 3(b) shows the proposed ResNet CNN architecture.
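As a rough sanity check, the feature-map sizes through the proposed layers can be traced with the standard convolution/pooling output-size formula. The 64*64 input size, stride-1 unpadded convolutions and 2*2 stride-2 pooling below are illustrative assumptions, not values stated in the paper.

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Standard output-size formula for convolution and pooling layers.
    return (size + 2 * pad - kernel) // stride + 1

size = 64                           # assumed input side length (64*64 image)
size = conv_out(size, 5)            # conv 1: 128 filters, 5*5 kernels -> 60
size = conv_out(size, 2, stride=2)  # pooling 1: 2*2, stride 2         -> 30
size = conv_out(size, 5)            # conv 2: 256 filters, 5*5 kernels -> 26
size = conv_out(size, 2, stride=2)  # pooling 2: 2*2, stride 2         -> 13
size = conv_out(size, 5)            # conv 3: 512 filters, 5*5 kernels -> 9
flat = size * size * 512            # flattened features entering the FCNNs
```

Under these assumptions the convolutional stack hands a 9*9*512 volume (41,472 features) to the fully connected stage; larger kernels or inputs change these numbers accordingly.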

Results and Discussions
MATLAB R2018 is used in this article to simulate the proposed CNN architecture for automatic weld image classification. Table 1 summarizes the classification results. The proposed ResNet CNN architecture detects 55 Excess weld images with 100% CA, 34 Good weld images with 100% CA, 77 No weld images with 98% CA and 75 Undercut weld images with 100% CA. Hence, the average CA of the weld image classification system using the proposed ResNet CNN architecture is about 99.5%. Further, the performance of the weld image classification system is analyzed using the following factors.
Se = TP / (TP + FN)
Sp = TN / (TN + FP)
SA = (TP + TN) / (TP + TN + FP + FN)
where TP and TN are the accurately detected crack and non-crack pixels, and FP and FN are the inaccurately detected crack and non-crack pixels. Table 2 shows the analysis of the weld image classification system using the conventional and proposed ResNet CNN architectures, Table 3 compares CNN architectures on the weld image classification task, Table 4 presents the performance analysis of the proposed crack detection method, and Table 5 compares the proposed crack detection method with conventional methods.
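These factors follow the standard pixel-level definitions; the sketch below is a minimal illustration, assuming TP/TN denote correctly detected crack/non-crack pixels and FP/FN the incorrectly detected ones.

```python
def crack_metrics(tp, tn, fp, fn):
    """Pixel-level sensitivity (Se), specificity (Sp) and
    segmentation accuracy (SA) for crack detection."""
    se = tp / (tp + fn)                   # sensitivity
    sp = tn / (tn + fp)                   # specificity
    sa = (tp + tn) / (tp + tn + fp + fn)  # segmentation accuracy
    return se, sp, sa
```

For example, 90 correctly detected crack pixels, 900 correct background pixels and 10 errors of each kind give Se = 0.9, with Sp and SA close to 0.99.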

Conclusions
This paper develops an automatic weld image classification system using a deep learning architecture that modifies the existing CNN architecture to achieve high performance. The proposed ResNet CNN architecture obtains 97.73% Se, 98.62% Sp, 99.33% SA and 99.5% CA. It detects 55 Excess weld images with 100% CA, 34 Good weld images with 100% CA, 77 No weld images with 98% CA and 75 Undercut weld images with 100% CA; hence the average CA of the weld image classification system using the proposed ResNet CNN architecture is about 99.5%. The simulation results are compared with those of existing methods.