Unsupervised CNN Model for Sclerosis Detection

Abstract: Sclerosis detection using brain magnetic resonance imaging (MRI) images is a challenging task. Given the promising classification accuracy of deep neural network models across a variety of applications, such models can be applied to sclerosis detection. The features associated with sclerosis are an important factor, highlighted by contrast lesions in brain MRI images. Sclerosis classification can initially be treated as a binary task in which sclerosis segmentation is avoided to reduce the complexity of the model. Sclerosis lesions have a considerable impact on the features extracted by the convolution process in convolutional neural network models. The images are used to train a convolutional neural network composed of 35 layers for the classification of sclerosis and normal brain MRI images. The 35 layers are a combination of convolutional layers, max-pooling layers and upscaling layers. The results are compared with the VGG16 model and are found satisfactory, with about 92% accuracy on the validation set.


Introduction
The presence of sclerosis is detected by locating lesions in brain MRI images. The lesions present in the MRI image are the main factor considered to identify sclerosis. Semantic segmentation of lesions is a complex task; for a simple sclerosis detection objective, segmentation can be avoided and the processing complexity reduced. Many classical methods extract spatial and textural features using hand-crafted rules for the identification of sclerosis, but classification accuracy ultimately depends on the convergence capability of the neural network classifier. It should be noted that features obtained through hand-crafted rules largely determine the convergence capability of the neural network, and hence limit the achievable accuracy. On the other hand, deep neural networks can be used for both feature extraction and classification, and the classification accuracy can be improved. Convolutional neural network models such as VGG16 and VGG19 are available for classifying images containing single objects and less crowded information content. However, these models fail to classify sclerosis images correctly, and the accuracy goal remains unsatisfied. Comparing these models with ResNets of 50 or 100 layers, one might intuitively increase the number of layers in VGG16 to improve accuracy. But as the number of layers increases, the number of max-pooling layers also increases, and the intermediate image size eventually becomes so small that convergence fails, resulting in poor performance. Hence, simply increasing the layers in the base VGG16 model yields little improvement and lossy outcomes in terms of validation accuracy.
So far, classical approaches have also focused on object segmentation followed by listing to recognize sclerosis images. Compared with these classical methods, convolutional neural networks also need to be trained specifically with respect to the objects in the images.
This work contributes a 35-layer convolutional neural network (CNN) model for classifying sclerosis images in an unsupervised manner. The word unsupervised here refers specifically to avoiding segmentation and not using ground-truth information about the coordinates of lesion regions in the sclerosis MRI images.

Related Work
Among all neurological illnesses, multiple sclerosis (abbreviated as MS) is a condition affecting the brain and/or spinal cord [1]. It may lead to a range of possible symptoms affecting, for example, movement [2], vision [3], and balance [4]. Traditional diagnosis methods rely on presenting symptoms. Nowadays, imaging techniques such as magnetic resonance imaging (MRI) are commonly used to detect the foci in the white matter of MS patients, providing a quantitative diagnostic method. Several treatments, such as interferon β-1a, natalizumab, and alemtuzumab, which suppress or modulate diverse immune system functions, have been used for many years with the aim of modifying the disease course. However, these treatments have either limited efficacy or potentially serious adverse events that prevent first-line use on a large scale. MS diagnosis may be confused with other white matter diseases, for example acute cerebral infarction [5], acute disseminated encephalomyelitis (ADEM), neuromyelitis optica (NMO) [6], etc.
Chen (2017) [16] used a linear regression classifier (LRC). Chen and Chen (2016) [17] used a Tikhonov regularization approach. These state-of-the-art approaches obtained promising results. Nevertheless, they are vulnerable to the following issues. First, the datasets were imbalanced, since MS data are harder to collect. Second, the feature descriptors in their methods were usually obtained by manual trial. Third, their classifiers were simple, and may not capture the complicated MS model. Our contribution in this study mainly centers on using deep learning to distinguish MS from healthy control (HC). The convolutional neural network (CNN) is a hotspot tool in the field of deep learning, and it has been successfully applied to many academic fields, e.g., remote sensing segmentation [18], contract default detection [19], pearl classification [20], cerebral microbleed detection [21], etc. In general, CNN can be used in pattern recognition, detection, estimation [27], prediction [28], regression, etc. In those papers, CNN was reported to achieve better performance than conventional computer vision approaches.
Our technique can help distinguish MS from healthy individuals. This can help neuroradiologists make a coarse diagnosis on a given magnetic resonance brain image. The neuroradiologists then have an opportunity to second-check the brain image and assess the severity of the MS disease. In future research, this second check is expected to be implemented automatically via CNN as well.

Proposed Work
The work is proposed by considering the following assumptions:
1. The model should classify the images into sclerosis and normal classes.
2. The segmentation requirement should be avoided and a totally unsupervised approach should be used.
3. Maximum accuracy in the classification of images should be the main objective.

The proposed work consists of two phases. The first phase consists of dataset preprocessing, and the second phase focuses on model development, implementation and classification. Figure 1 shows the block diagram of the proposed work. Each input image is first preprocessed and then passed through the convolutional neural network layers for feature extraction and classification. The detailed architecture is shown in Figure 2. When the input image size is 128x128 or less, adding layers leads to loss of scene details and increasingly focuses on single-object characteristics. This leads to a model similar to VGG16, and increasing the layers even in a modified VGG16 shows similarly lossy progress. To avoid this loss and keep the scenery characteristics paired together while retrieving image features, it becomes necessary to expand the image using upscaling. Upscaling, as a routine principle, also increases the detail of the features. In most medical image segmentation algorithms, such as U-Net, upscaling plays an important role in detailed feature analysis during segmentation. This principle can be used to enhance the scene details without concatenation and without further downscaling operations.
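The exact arrangement of the 35 layers is not listed here, so the following is only an illustrative Keras sketch of the kind of stack described: convolution and max-pooling blocks interleaved with an upscaling layer, ending in a two-class head. The layer sizes and the optimizer are assumptions, not taken from the paper.

```python
# Illustrative sketch (NOT the authors' exact 35-layer architecture):
# a small Keras stack mixing convolution, max-pooling and upscaling
# layers, with a binary (sclerosis vs. normal) softmax head.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sketch_model(input_shape=(128, 128, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Downsampling blocks: convolution followed by max-pooling.
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        # Upscaling block: expands the feature maps to recover
        # spatial detail lost to pooling.
        layers.UpSampling2D(2),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        # Classification head for the two classes.
        layers.GlobalAveragePooling2D(),
        layers.Dense(2, activation="softmax"),
    ])
    return model

model = build_sketch_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The real model would repeat such blocks until the stated depth of 35 layers is reached; the key idea sketched here is that an upscaling layer sits between pooling stages rather than only in a decoder.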
In many semantic segmentation methods based on CNN models, upscaling layers are used to maximize the detail of the objects in the image while performing scene recognition. Based on the assumptions highlighted in this work, the segmentation requirement is avoided entirely, but at the same time, to increase the detail of the features with respect to the objects, upscaling layers are included in our model, which improves performance and avoids a lossy structure in the feature sets. Moreover, it does not require any concatenation or merging of feature maps as in semantic segmentation.
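The effect of an upscaling layer can be shown with plain NumPy: nearest-neighbour upsampling (what Keras' `UpSampling2D` does by default) repeats each value along height and width, expanding the feature map so later convolutions operate at a larger spatial resolution.

```python
# What an upscaling layer does to a feature map, in plain NumPy.
# Keras' UpSampling2D performs this same repetition per channel.
import numpy as np

def upsample_2x(feature_map):
    """Nearest-neighbour 2x upsampling: repeat each value
    along height (axis 0) and width (axis 1)."""
    return np.repeat(np.repeat(feature_map, 2, axis=0), 2, axis=1)

fm = np.array([[1, 2],
               [3, 4]])
print(upsample_2x(fm))
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```

No information is added by the repetition itself; the benefit comes from the convolutions that follow, which can refine detail at the larger scale without the concatenation used in encoder-decoder segmentation networks.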
The proposed model's performance is compared with the VGG16 model. The VGG16 model is modified by changing the number of classes in the last dense layer to 2. This model is trained on the same dataset used for training the proposed model.
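A minimal sketch of that VGG16 baseline in Keras: the convolutional base is kept and the top is replaced with a two-class dense layer. The input size, the intermediate dense width, and `weights=None` (random initialization instead of downloaded ImageNet weights) are assumptions for illustration, not details from the paper.

```python
# Sketch of the modified VGG16 baseline: convolutional base kept,
# top replaced with a 2-class softmax (sclerosis / normal).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights=None, include_top=False, input_shape=(128, 128, 3))

vgg16_binary = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # assumed intermediate width
    layers.Dense(2, activation="softmax"),  # last dense layer set to 2 classes
])
vgg16_binary.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])
```

Training then proceeds with the same `fit` call and dataset as the proposed model, so the comparison isolates the architecture difference.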

Results and analysis
The proposed model is implemented in Python using TensorFlow and Keras. Training is performed with 100 epochs and 10 steps per epoch. The machine configuration used for experimentation consists of an Intel i7 10th-gen processor, 32 GB DDR4 RAM and a 6 GB NVIDIA GTX 1660 Ti graphics processor.
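The stated training configuration (100 epochs, 10 steps per epoch) maps directly onto Keras' `fit` arguments. The sketch below is runnable with random placeholder data standing in for the MRI dataset; the tiny stand-in model, batch size and placeholder data are assumptions.

```python
# Runnable sketch of the training setup: 100 epochs, 10 steps per epoch.
# Random arrays stand in for the brain MRI images and binary labels.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
x_train = rng.random((80, 16, 16, 1)).astype("float32")  # placeholder images
y_train = rng.integers(0, 2, size=80)                    # placeholder labels

model = models.Sequential([
    layers.Input(shape=(16, 16, 1)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 80 samples / batch size 8 = exactly 10 steps per epoch, as in the paper.
history = model.fit(x_train, y_train, batch_size=8,
                    epochs=100, steps_per_epoch=10, verbose=0)
```

With the real dataset, only `x_train`, `y_train` and the model definition change; the epoch and step counts stay as stated above.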
The feature maps are obtained by training the proposed architecture. The outputs of layers 20, 30 and 32, in terms of 4x4 feature maps, are shown in Figure 3. The loss analysis over a total of 1000 steps is shown in Figure 4 (loss rate analysis). The accuracy analysis is performed for different numbers of epochs, of which 100 epochs show better performance and serve as the minimum required for training. Table 1 and Figure 5 show the accuracy analysis. The retrained VGG16 model's performance is evaluated in a similar manner.
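Reading out intermediate feature maps (such as those from layers 20, 30 and 32 of the trained network) is done in Keras by building a probe model whose output is a chosen layer's activation. The small functional model below is a stand-in; the same pattern applies to the real 35-layer model with the layer indices from the paper.

```python
# Sketch: extracting intermediate feature maps from a Keras model.
# The tiny model here is a stand-in; for the real network the probed
# indices would be 20, 30 and 32.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(32, 32, 1))
x = layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D(2)(x)
outputs = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
model = models.Model(inputs, outputs)

# Probe model: same input, output taken from an intermediate layer.
layer_index = 2  # the max-pooling layer in this stand-in model
probe = models.Model(inputs=model.inputs,
                     outputs=model.layers[layer_index].output)

image = np.random.default_rng(0).random((1, 32, 32, 1)).astype("float32")
feature_maps = probe(image).numpy()
print(feature_maps.shape)  # one 2-D map per filter in the probed layer
```

Each channel of `feature_maps` can then be plotted as a grayscale image, which is how figures of the kind shown in Figure 3 are typically produced.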

Conclusion
This position paper contributes an unsupervised CNN model for binary classification of a large image dataset into sclerosis and normal classes. The model does not require semantic scene segmentation and hence reduces the complexity of the architecture. The performance of the model is up to 89%, which is better than the VGG16 model. A further concluding remark is that preserving maximum detail is the main processing requirement for the max-pooling and convolutional operations when increasing the number of layers in a deep network model. The results obtained from single-dataset experimentation are satisfactory and also indicate a path forward: evaluating more combinations of different datasets with different preprocessing constraints, and further modifications of the model architecture, which may include the use of attention models.