A Comparative Analysis of Artificial Neural Network and Deep Learning Techniques in Heart Disease Detection

The use of computers has drastically changed human lifestyles, enabling people to solve many complex problems that were previously intractable. Many real-time applications require the joint interaction of human and machine to obtain fruitful results. The Artificial Neural Network, based on the concept of the biological brain, plays a major role here, but it still suffers from many limitations. To overcome these, a newer approach called Deep Learning, inspired by the same biological systems as the Artificial Neural Network, has emerged to find better solutions in the field of medical science. The main aim of this paper is to present a comparative analysis of these two techniques as used in the detection of heart disease.


Introduction
[A] Artificial Neural Network
There are many things at which computers are highly capable, and which they can do better than humans, such as calculating square roots or retrieving a web page instantaneously. The human brain, however, moves one step ahead when it comes to common sense, inspiration and imagination: it interprets real-world situations in a way that computers cannot. To bridge this gap, Artificial Neural Networks (ANN) came into existence in the 1950s. Inspired by the structure and functionality of the human brain, the ANN is an attempt to make computers more human-like by enabling them to think and solve problems more as humans do.
An ANN is an attempt to simulate the network of neurons that makes up a human brain, so that a computer can learn and make decisions in a human-like manner. It belongs to the field of Artificial Intelligence (AI) and imitates the human brain so that computers can understand things and make decisions without being explicitly programmed. An ANN interprets sensory data through machine perception, labeling and clustering of raw input. It is a set of algorithms designed to recognize numerical patterns contained in vectors, into which all real-world data (images, sound, text or time series) must be translated.
Since an ANN is an efficient computing system whose central theme is drawn from the analogy of the biological nervous system, it is also called a "parallel distributed processing system" or "connectionist system": it consists of a large collection of units interconnected in some pattern that allows communication between them.
These units, referred to as nodes or neurons, are simple processors that operate in parallel, and every neuron is connected to other neurons through connection links. Each connection link is associated with a weight that carries information about the input signal, which is the most useful information a neuron has for solving a particular problem. Because the weight usually excites or inhibits the signal being communicated, each neuron has an internal state, called an activation signal. The output signal, produced by combining the input signals with an activation function, is sent on to other units.

[i] Working of ANN
An ANN works by using different layers of mathematical processing to make sense of the information it is fed. Typically, an ANN has dozens to millions of artificial neurons, called processing units, arranged in a series of layers. The input layer receives information in various forms from the outside world; this is the data the network aims to process or learn from. From the input units, the data passes to the next layer, called the hidden layer. The main job of the hidden layer, of which there can be one or more, is to transform the input into something the output layer can use.
Most ANNs are fully connected from one layer to the next, and these connections are weighted: the greater the weight, the more significance and influence one unit has on another, much as in the human brain. As data passes through each layer, the network learns more about it. On the far side of the network are the output units, where the network responds to the data it was given and processed.

[ii] Model of Artificial Neural Network
The general model of an ANN, together with its processing, comprises the following layers:
(i) Input Layer: It accepts data in various formats from the outside world and passes it on to the network.
(ii) Hidden Layer: The hidden layer lies between the input and output layers. It performs all the calculations needed to find hidden features and patterns.
(iii) Output Layer: The input goes through a series of transformations in the hidden layers, finally producing the output that is conveyed through this layer.
The ANN takes the inputs, computes their weighted sum, and adds a bias; this computation is represented in the form of a transfer function.
The weighted total is then passed as input to an activation function, which decides whether a node should fire or not. Only the nodes that fire pass their signal on toward the output layer. Distinct activation functions are available and are chosen according to the sort of task being performed.
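As an illustration, the computation just described can be sketched in a few lines of Python. The sigmoid activation and the specific weights and bias below are illustrative assumptions, not values from the paper:

```python
import math

def neuron_output(inputs, weights, bias):
    """Single artificial neuron: the weighted sum of the inputs plus a
    bias (the transfer function) is passed through a sigmoid activation,
    which squashes the result into (0, 1) and determines how strongly
    the node "fires"."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid activation

# Example: two inputs with hand-picked weights and bias
print(neuron_output([0.5, 0.8], [0.4, -0.2], bias=0.1))
```

Other activation functions (tanh, ReLU, softmax) can be swapped in by replacing the final line, depending on the task at hand.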

[iii] Types of ANN
ANNs use two types of network architecture: feedforward and feedback.

(a) Feedforward ANN
In this type of network, the flow of information is unidirectional: a unit sends information to other units from which it does not receive any information. As it has fixed inputs and outputs, it does not contain feedback loops. This type of network is widely used in pattern generation, recognition and classification.
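The unidirectional flow of a feedforward network can be sketched as a layer-by-layer pass, where each layer's output becomes the next layer's input and no signal ever travels backwards. The sigmoid activation and the tiny hand-picked weights below are illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(layer_weights, layer_biases, inputs):
    """Propagate inputs through a feedforward network, layer by layer.
    There are no feedback loops: information flows strictly forward."""
    activations = inputs
    for weights, biases in zip(layer_weights, layer_biases):
        activations = [
            sigmoid(sum(a * w for a, w in zip(activations, row)) + b)
            for row, b in zip(weights, biases)
        ]
    return activations

# A tiny 2-input -> 2-hidden -> 1-output network with illustrative weights
weights = [
    [[0.5, -0.3], [0.8, 0.2]],  # hidden layer (2 neurons)
    [[1.0, -1.0]],              # output layer (1 neuron)
]
biases = [[0.1, -0.1], [0.0]]
print(forward(weights, biases, [1.0, 0.0]))
```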

(b) Feedback ANN
In this type of network, feedback loops are allowed, so signals can also travel backwards through the network. Such networks are used in content-addressable memories.
[iv] Learning Rules of ANN
On the basis of learning, ANNs are classified as follows:
(a) Supervised Learning: This involves a "teacher" that has more knowledge of the task than the ANN itself. For example, in a pattern-recognition application, the ANN makes guesses while recognizing the output patterns, whereas the teacher knows the correct answers and provides them to the ANN. The network then compares its guesses with the teacher's "correct" answers and makes adjustments according to the errors.
(b) Unsupervised Learning: This is needed when there is no example data set with known answers. For example, in searching for a hidden pattern, clustering (dividing a set of elements into groups according to some unknown pattern) is carried out based on the existing data sets.
(c) Reinforcement Learning: This type of learning is built on observation. The ANN makes a decision by observing its environment; if the outcome is negative, the network adjusts its weights so as to make a different decision the next time.
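The supervised case above can be illustrated with the classic perceptron learning rule, where the teacher's labels drive the weight adjustments. This is a simplified sketch; the learning rate, epoch count and the logical-AND task are illustrative choices:

```python
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Supervised learning sketch: the 'teacher' supplies the correct
    label for each sample, and the network adjusts its weights in
    proportion to the error (perceptron learning rule)."""
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = target - pred  # teacher's answer minus the network's guess
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn logical AND from labelled examples
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X])
```

In the unsupervised and reinforcement settings, the `error` term would instead come from reconstructing the input or from an environmental reward signal, respectively.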

[B] Deep Learning
Deep Learning (DL) is a subfield of machine learning (ML). It is based entirely on algorithms inspired by the structure and function of the ANN, which in turn is based on the functionality of the biological brain. DL is the name used for "stacked neural networks", i.e., networks composed of more than three layers, counting the input and output layers. These layers are made up of nodes where the computation happens; modeled on the neurons of the human brain, a node fires when it encounters sufficient stimuli.
A node combines the input data with a set of coefficients, or weights, that either amplify or dampen the input, thereby assigning significance to the inputs with respect to the task the algorithm is learning. These input-weight products are summed and passed to the node's activation function, which determines to what extent the signal should progress through the network to affect the ultimate outcome, e.g. a classification. If the signal passes through, the neuron has been "activated". In this process, such networks learn to recognize correlations between relevant features and optimal results, i.e., to draw connections between feature signals and what those features represent, whether in a full reconstruction or with labelled data.
A DL network trained on labelled data can then be applied to unstructured data, giving it access to far more input than ML networks limited to structured data. This is a recipe for higher performance: the more data a network can train on, the more accurate it is likely to be. The ability of DL to process and learn from huge quantities of unlabelled data therefore gives it a distinct advantage over previous algorithms.
A DL network differs from a single-hidden-layer ANN in the number of hidden layers through which the data passes in a multistep process of pattern recognition. Earlier neural networks, such as the first perceptrons, were shallow, comprising one input layer, one output layer and at most one hidden layer in between; a DL network, by definition, has more than one hidden layer, and each layer of its nodes trains on a distinct set of features based on the previous layer's output. The deeper one goes into the network, the more complex the features its nodes can recognize, since they aggregate and recombine features from the preceding layer.
This concept is known as feature hierarchy: a hierarchy of increasing complexity and abstraction that makes DL networks capable of handling very large, high-dimensional data sets with billions of parameters passing through nonlinear functions. Above all, such networks are capable of discovering latent structures within unlabelled, unstructured data, which constitutes the vast majority of data in the world.
Such unstructured, raw data includes pictures, texts, video and audio recordings. One of the problems DL therefore solves best is processing and clustering raw, unlabelled media, detecting similarities and anomalies in data that no human has organized in a relational database. For example, DL can take a million images and cluster them according to their similarities: cats in one corner, ice breakers in another, and all the photos of your grandmother in a third. DL performs automatic feature extraction without human intervention, unlike most traditional ML algorithms. As feature extraction is a task that teams of data scientists can take years to accomplish, DL is a way past the chokepoint of limited experts: it augments the powers of small data science teams, which by their nature do not scale. When training on unlabelled data, each layer of a DL network learns features automatically by repeatedly trying to reconstruct the input from which it draws its samples, attempting to minimize the difference between the network's predictions and the probability distribution of the input data itself.
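As a stand-in for the deep clustering described above, the following sketch uses simple k-means (a classical algorithm, not a deep network) on one-dimensional values, purely to show the idea of grouping unlabelled data by similarity without any teacher:

```python
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Unsupervised clustering sketch: group 1-D points by similarity
    with no labels, analogous to how a deep network can organize
    unlabelled media by shared features."""
    random.seed(seed)
    centres = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# Two obvious groups of values separate into two cluster centres
print(kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 8.7]))
```

A deep network replaces the simple distance measure here with learned feature representations, which is what lets it cluster images or audio rather than bare numbers.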
Restricted Boltzmann machines create reconstructions in this manner. In doing so, the network learns to recognize correlations between relevant features and optimal results, i.e., it draws connections between feature signals and what those features represent, whether in a full reconstruction or with labelled data.
A DL network ends in an output layer: a logistic or softmax classifier that assigns a likelihood to a particular outcome or label, which is why such networks are called predictive in a broad sense. DL is among the most exciting and powerful branches of ML. It is a technique that teaches computers to do what comes naturally to humans, namely learning by example, and it has become the key technology behind many applications of today's era.
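The softmax output layer mentioned above can be sketched as follows; the logit values fed in are illustrative:

```python
import math

def softmax(logits):
    """Softmax output layer: convert raw scores into a probability
    distribution over class labels; the highest-probability class is
    the predicted outcome."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs, "predicted class:", probs.index(max(probs)))
```

The probabilities always sum to one, so the layer can be read directly as the network's confidence in each label.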
The basic reason DL is getting so much attention is that it achieves results that were not possible before. A computer model learns to perform classification tasks directly from images, text or sound, and can reach an accuracy that sometimes exceeds human performance. Models are trained using large sets of labelled data and network architectures that contain many layers.
DL models can thus be used for a wide variety of complex tasks.

2. Literature Survey
An extensive amount of information is available to clinical specialists, ranging from clinical symptoms to various types of biochemical data and the outputs of imaging devices. Since each type of data provides information that must be evaluated and assigned to a particular pathology during the diagnostic process, an efficient AI technique, the computer-aided diagnosis of the ANN, is employed in daily routine to avoid misdiagnosis.
These adaptive learning algorithms can handle diverse types of medical data and integrate them into categorized outputs. Amato et al. [1] concluded that ANNs represent a powerful tool to help physicians perform diagnoses, offering many advantages, including: (i) the ability to process large amounts of data; (ii) a reduced chance of overlooking relevant information; and (iii) a reduced time to diagnosis.
ANNs have proven suitable for the satisfactory diagnosis of various diseases, and their use can make diagnosis more reliable and increase patient satisfaction.
Since ANNs find many uses in medical diagnosis, a natural goal is to evaluate their diagnostic performance. Another researcher [2] studied two cases: first, acute nephritis disease, whose data are the symptoms of the disease; and second, heart disease, whose data are obtained from cardiac Single Proton Emission Computed Tomography (SPECT) images. For this study, each patient was classified into one of two categories: infected and non-infected.
Because classification is an important decision-support tool in medical diagnosis, a feed-forward back-propagation neural network was used as the classifier to distinguish between infected and non-infected persons. The main aim of their study was to evaluate ANNs in disease diagnosis using data presented in the form of symptoms and images.
In the first case, the percentage of correctly classified samples in the simulation by the feed-forward back-propagation network was very large, while in the second case it was somewhat lower. On the basis of these results, they concluded that the proposed ANN-based diagnosis is useful for identifying infected persons.
Physicians have a great opportunity, and a responsibility, to track the continuous developments made in AI and to apply them according to their needs, in order to find tools for clinical practice. Despite significant advances in diagnosis and treatment, cardiovascular disease (CVD) remains the leading cause of morbidity and mortality in the world. AI techniques such as ML and DL can also improve medical knowledge, given the increase in the volume and complexity of the data.
In order to improve and optimize CVD outcomes, AI techniques have the potential to change the way cardiology is practiced, particularly in imaging, by offering novel tools to interpret data and make clinical decisions. Another researcher [3] reviewed the use of AI and digital health applied to cardiovascular medicine, and concluded that AI offers physicians the full potential of cardiac imaging in clinical practice.
Classification is among the most actively researched topics in ANNs, and the literature is vast and growing rapidly. Another researcher [4] reviewed the issues important for the development of ANNs for classification problems, including posterior probability estimation, the link between neural and conventional classifiers, and the relation between learning and generalization in ANN classification, all of which are needed to improve the performance of neural classifiers. Since research efforts during the last decade have made significant progress in both theory and practical applications, the purpose of that study was to provide an overview of previous research in this area and the efforts made so far. The ANN was found to be a demonstrated alternative to traditional classifiers for many practical classification problems. They concluded that, to become a tool for decision making, ANNs need to be systematically evaluated and compared with other new and traditional classifiers.
Another researcher [5] surveyed ANNs and how they can be applied to address human needs in real-world applications. Their study also presented the application challenges, contributions, comparative performance and critical methods in various disciplines: computing, science, engineering, medicine, the environment, and so on. They found that ANN models such as feedforward and feedback-propagation networks performed better on human problems, as the data analysis gained in accuracy, processing speed, fault tolerance, latency, performance, volume and scalability.
They also found that, rather than applying a single method, ANN models such as Feed Forward Back Propagation (FFBP) and hybrid models offer better performance and efficiency for implementing solutions to human problems. They concluded that the ANN is a computational model with wide uses for handling various complex real-world issues, and that research should therefore focus on combining ANNs into one network-wide application, owing to their information-processing characteristics: learning power, high parallelism, fault tolerance, nonlinearity, noise tolerance and generalization capability.
While ANNs have played a key role in many important developments in medicine and health over the past years, many challenges still lie ahead. Another researcher [6] presented an overview of the ANN as a tool for processing biomedical signals across a variety of applications in medical fields such as cardiology, gynaecology and neuromuscular control, and observed that progress is being made with hybrid systems. ANNs were found to support the diagnosis and prediction of diseases, having emerged from the combination of AI tools with improved performance in medical diagnosis and rehabilitative prediction. They concluded that the practical application of ANNs presumes considerable experience with the chosen type of network, with data normalization and with network parameters, in order to achieve results that give physicians the best possible decisions.
Another researcher [7] used ANNs in medicine for disease diagnostic systems. Since ANNs are able not only to recognize examples but also to retain very important information, one of their main areas of application is the interpretation of medical data. In that work, a database consisting of data on many patients was used for training and testing. The network was trained over many examples, although the reliability of the ANN was not especially high. It was observed that the theoretical difficulty, workload and time spent on network modelling were compensated by the simplicity of the model, and that if problem formulation and optimal training are done by specialists, the practical application of the solution requires only basic computer knowledge. They concluded that the difficulty users face in interpreting the trained system matters little so long as what counts is not how the neural network functions, but the result obtained: the information, its accuracy and the operating speed.
The aim of Artificial Intelligence is to mimic human cognitive functions, and it is bringing a paradigm shift to healthcare, driven by the increasing availability of healthcare data and rapid progress in analytics techniques. Another researcher [8] studied the current status and future of AI and its applications in healthcare. AI can be applied to various types of healthcare data, both structured and unstructured; it includes machine learning methods for structured data, such as the classical support vector machine and neural network, and modern deep learning, as well as natural language processing for unstructured data. These sophisticated algorithms must first be trained on healthcare data before the system can assist physicians in diagnosing disease and suggesting treatments. Major disease areas that use AI tools include cancer, neurology and cardiology. Their survey covered the three major categories of AI applications in stroke care: machine learning, natural language processing and deep learning. Stroke is a chronic disease with acute events, and its management is a complicated process with a series of clinical decision points. Traditionally, clinical research has focused solely on single or very limited clinical questions while ignoring the continuous nature of stroke management. Taking advantage of large amounts of data rich in information, AI is expected to enable the study of far more complicated, yet much more realistic, clinical questions, which can lead to better decision making in stroke management.
Although AI is attracting substantial attention in medical research, its real-life implementation still faces obstacles. The first hurdle comes from regulation, as current regulations lack standards for assessing the safety and efficacy of AI systems; the second is the exchange of data. To work well, AI systems need to be trained (continuously) on data from clinical studies. However, once an AI system is deployed after initial training on historical data, continuation of the data supply becomes a crucial issue for the further development and improvement of the system. They concluded that applying AI to stroke in the three major areas of early detection and diagnosis, treatment, and outcome prediction and prognosis evaluation is the better approach.
The use of AI in healthcare has produced a variety of information that has been examined and reviewed for important types of diseases. AI has the potential to detect significant interactions in a dataset and can be widely used in several clinical settings to predict outcomes, guide treatment and aid diagnosis. Within AI, machine learning and natural language processing form the two major groups, and within machine learning the ANN and the SVM are the two most accepted methods. Another researcher [9] observed that AI must include ML methods for handling structured data such as EP data, images and genetic data, and natural language processing (NLP) for drawing inferences from unstructured text. Its algorithms are used in healthcare to support physicians in analysing disease and selecting the required treatment plans. That work focused on how computer-oriented assessment methods within AI can help improve health and clinical practice; the use of ANNs represents revolutionary progress, generating enthusiasm in several fields of healthcare science, drug analysis and public health. During the study, they found that deep neural networks can perform as well as the most excellent human clinicians on certain diagnostic responsibilities. They concluded that data may come in the form of MRI, X-ray or ultrasound pictures of the patient, visual records of lung or heart function, or verbal descriptions of the patient as seen by medical personnel; when such data are accumulated and applied in health research to develop treatment procedures, they are condensed into digital statistical information, i.e. the digital output obtained by the ANN, a demanding task whose resulting information is ultimately beneficial to the consumer.
The ANN is a nonlinear regression computational device that has been used for more than four decades for classification, survival analysis and prediction in several biomedical systems, including colon cancer. Another researcher [10] studied the use of a three-layer feed-forward ANN (FFANN) with back-propagation of error in biomedical fields, and a methodological approach to its application in cancer research, as exemplified by colon cancer. FFANNs offer many advantages when applied to biomedical decision-making, including: (a) requiring less formal statistical training to develop; (b) better discriminating power than regression models; (c) the possibility of development using multiple different training algorithms; (d) a parallel nature that lets them absorb a certain amount of inaccurate data without serious effect on predictive accuracy (i.e., graceful degradation); (e) the ability to accurately detect complex nonlinear relationships between independent and dependent variables, and all possible interactions between variables, as they make no assumptions about those variables; (f) a reduced number of false positives without a significant increase in false negatives; and (g) the possibility of individual case prediction.
On the other hand, the disadvantages include: (a) being considered "black box" methods, in which one cannot exactly understand what interactions are being modeled in the hidden layers, in contrast to "white box" statistical models; (b) limited ability to identify possible causal relationships; (c) empirical model development, providing low decision insight, with many methodological issues remaining to be solved; (d) models prone to overfitting; (e) lengthy development and optimization time; (f) greater difficulty of use in the field because of computational requirements; and (g) conflicting evidence as to whether they are better than traditional regression statistical models for data classification or for predicting outcomes. Despite their theoretical advantages, ANNs do not universally outperform standard regression techniques, for several reasons: (a) from a practical point of view, only a limited amount of data related to the outcome of interest can be collected, and these data are mostly based on studies in which a standard regression model was used, so only the factors that were significant in regression models are collected in subsequent studies; nonlinear factors, or those involving interaction with other variables, may not have emerged as "significant" in the regression analysis and are therefore not reflected in the literature as important prognostic factors; (b) all variables and outcomes are measured with error, and a nonlinear relation measured with error may well be adequately represented by a linear model; (c) there exist data barriers beyond which mathematical models are unable to make predictions in biological systems; and (d) regression models are superior to ANNs when drawing inferences and interpretations based on outputs.
In addition to insight into the disease process, regression models provide explicit information regarding the relative importance of each independent variable. This information can be valuable in planning subsequent interventions, in eliminating unnecessary tests or procedures unrelated to the outcome of interest, and in determining which data are the most critical to store in the database. It was concluded that using the results of ANNs is worthwhile, as applying an unsupervised method such as the ANN to data analysis improves confidence in the reliability of the data reported in the scientific literature.
ANNs are extensively used in many application areas because of their ability to learn and generalize from data and their similarity to human reactions; they can therefore serve as a classifier, a dynamic model and a diagnosis tool. Another researcher [11] studied blood-flow emboli classification based on transcranial ultrasound signals, tissue temperature modelling based on an imaging transducer's raw data, and the identification of ischemic cerebral vascular accident (ICVA) areas based on computed tomography images. They examined several biomedical cases and how parameter estimation was achieved using NNs, in particular RBFNNs. From the classification of emboli based on transcranial Doppler ultrasound blood-flow signals, to the design of a diagnostic tool for ICVA identification, and finally temperature estimation in ultrasound-induced hyperthermia using simple to more realistic phantoms, they described the methods used and how the data served as inputs to the NNs. They concluded that in all cases the ANN produced very accurate results, encouraging wider use of computationally intelligent techniques in medical applications.
Neural networks are currently a hot research area in medicine, and medical image recognition algorithms are widely used to improve the accuracy of diagnosing various diseases. Another researcher [12] proposed an approach based on image processing and a feed-forward neural network trained by the error back-propagation algorithm to classify heart valve diseases. The goal of that study was to apply image-processing techniques to extract texture features from medical echocardiography images, combining intensity histogram features and Gray Level Co-occurrence Matrix (GLCM) features, and then to develop an artificial neural network based on the back-propagation algorithm for automatic, more accurate classification of heart valve diseases. The proposed method was evaluated in terms of precision, recall and accuracy, and the results demonstrate the effectiveness of the algorithm. On the basis of the experimental results, which confirm the efficiency of the method and its good classification, they concluded that the proposed method helps beginning physicians and other professionals working in the field of heart disease to classify and diagnose heart valve diseases with high accuracy and efficiency.
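To illustrate the GLCM texture features used in that study, the following sketch builds a co-occurrence matrix by hand and derives two common texture features, contrast and energy. The 4-level toy image and the horizontal offset are illustrative assumptions, not data from [12]:

```python
import numpy as np

def glcm(image, levels=4, dx=1, dy=0):
    """Gray Level Co-occurrence Matrix: count how often a pixel of
    intensity i has a neighbour of intensity j at offset (dx, dy),
    then normalize to joint probabilities. Texture features such as
    contrast and energy are derived from this matrix."""
    mat = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            mat[image[y, x], image[y + dy, x + dx]] += 1
    return mat / mat.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img)
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
energy = (p ** 2).sum()
print(round(contrast, 3), round(energy, 3))
```

Feature vectors of this kind (contrast, energy, homogeneity, correlation, plus histogram statistics) would then be fed as inputs to the back-propagation classifier.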
Unlike these systems, humans learn to perceive patterns by directing attention to relevant parts of the available data. In recent years, deep artificial neural networks (including recurrent ones) have won many contests in pattern recognition and machine learning, and in the near future they will continue to do so by extending work on NNs dating back to the 1990s. Another researcher [13] surveyed shallow and DL methods, distinguishing them by the depth of their credit-assignment paths, which are chains of possible learning and causal links between actions and effects. The brain seems to minimize such computational costs during problem solving in two ways: (1) at any given time, only a small fraction of all neurons are active, since local competition through winner-take-all mechanisms shuts down many neighbouring neurons, and only the winners can activate other neurons through outgoing connections; and (2) numerous neurons are sparsely connected in a compact 3D volume by many short-range and few long-range connections (much like microchips in traditional supercomputers). Moreover, many neighbouring neurons are allocated to a single task, reducing communication costs, and many future deep NNs will take into account that it costs energy to activate neurons and to send signals between them. The survey covered deep supervised learning using back-propagation, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks. They noted that the most successful current deep RNNs are not of this kind, and concluded that the near future may belong to general-purpose learning algorithms that improve themselves in optimal ways.
DL approaches are practical for solving many problems and have achieved great success in recent years in fields such as computer vision and natural language processing. Another researcher [14] introduced deep learning models and their frameworks in detail, and found that, compared with traditional machine learning methods, deep learning has a strong learning ability and can make better use of datasets for feature extraction. They also found that its capacity for unsupervised learning will be increasingly valuable, since there are millions of data items in the world and it is impractical to label all of them. They further predicted that neural network structures will become more complex, so that they can extract more semantically meaningful features and make better use of reinforcement learning. They concluded that, owing to its practicality, deep learning is becoming ever more popular among researchers; their paper mainly introduces some advanced deep learning networks and their applications, and also discusses the limitations and prospects of deep learning.
As DL offers promise for medical diagnostics, another researcher [15] aimed to compare the diagnostic accuracy of deep learning algorithms with that of health-care professionals in classifying diseases from medical imaging. They found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals. However, a major finding of the work is that few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample. Moreover, poor reporting in deep learning studies limits reliable interpretation of the reported diagnostic accuracy. New reporting standards that address the specific challenges of deep learning could improve future studies, enabling greater confidence in the results of future evaluations of this promising technology.
Since the implementation of clinical decision support algorithms for medical imaging faces challenges with reliability and interpretability, another researcher [16] established a deep-learning-based diagnostic tool for screening patients with common treatable blinding retinal diseases. Their framework uses transfer learning, which trains a neural network with a much smaller dataset than conventional approaches require. By employing a transfer learning algorithm, the model achieved competitive performance in OCT image analysis without the need for a highly specialized deep learning system or a database of millions of example images. The performance of their model depends strongly on the weights of the pre-trained model, and by applying this approach to a dataset of OCT images they showed performance comparable to human experts in classifying age-related macular degeneration and diabetic macular edema. They also provided a more transparent and interpretable diagnosis by highlighting the regions recognized by the neural network. They further demonstrated the general applicability of the AI system to the diagnosis of pediatric pneumonia from chest X-ray images, and found that this tool may expedite the diagnosis and referral of these treatable conditions, thereby facilitating earlier treatment and improved clinical outcomes. By demonstrating efficiency across multiple imaging modalities and a wide range of pathology, they concluded that this transfer learning framework presents a compelling system for further exploration and analysis in biomedical imaging, and for more general application as an automated community-based AI system for the diagnosis and triage of common human diseases. Another researcher [17] observed that recent years have seen a surge of interest in data analytics with patient Electronic Health Records (EHR).
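The transfer-learning idea used in [16] — a frozen pre-trained feature extractor with a small, newly trained classification head — can be sketched as follows. The random "pre-trained" weights and synthetic labels here are stand-ins for a real pre-trained network and clinical images; only the structure of the method is faithful.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor. In practice this is a large network
# trained on millions of images; fixed random weights merely stand in
# for the learned features here (an illustrative assumption).
W_frozen = rng.normal(size=(8, 4))

def extract_features(x):
    return np.tanh(x @ W_frozen)   # frozen layer: never updated below

# Small labelled set, mimicking the low-data clinical setting.
X = rng.normal(size=(40, 8))
y = (X[:, 0] > 0).astype(float)

# Transfer learning: only the new classification head is trained.
w_head, b_head = np.zeros(4), 0.0
for _ in range(500):
    f = extract_features(X)
    p = 1.0 / (1.0 + np.exp(-(f @ w_head + b_head)))   # sigmoid head
    grad = p - y                                       # gradient of log-loss w.r.t. logits
    w_head -= 0.1 * f.T @ grad / len(X)
    b_head -= 0.1 * grad.mean()

pred = 1.0 / (1.0 + np.exp(-(extract_features(X) @ w_head + b_head))) > 0.5
accuracy = (pred == y).mean()
print(accuracy)
```

Because gradient descent touches only `w_head` and `b_head`, the small labelled set is enough to fit the head, which is exactly why transfer learning suits settings without millions of example images.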
Data-driven healthcare, which aims at the effective utilization of big medical data and represents the collective learning from treating millions of patients, is believed to be one of the most promising directions for transforming healthcare toward the best and most personalized care. Effective feature extraction, or phenotyping, from patient EHRs is therefore a key step before any further application. In their study they proposed a deep learning approach for phenotyping from patient EHRs. Their framework is composed of four layers: an input layer, a one-side convolution layer, a max-pooling layer and a softmax prediction layer, and uses this four-layer convolutional neural network model for extracting phenotypes and predicting performance. They also investigated different temporal fusion mechanisms to exploit the temporal smoothness of patient EHRs, and finally demonstrated the effectiveness of the model on both synthetic and real-world data, quantitatively and qualitatively.
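The four-layer architecture of [17] — input, one-side convolution, max-pooling and softmax prediction — can be sketched as a forward pass in NumPy. A "one-side" convolution slides only along the time axis of a patient's event matrix. The event counts, filter sizes and class count below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def one_side_conv(ehr, filters):
    """Slide each filter along the time axis only ("one-side" convolution):
    a filter spans all medical-event rows but only a short time window."""
    n_events, n_steps = ehr.shape
    n_filters, _, width = filters.shape
    out = np.empty((n_filters, n_steps - width + 1))
    for f in range(n_filters):
        for t in range(n_steps - width + 1):
            out[f, t] = np.sum(filters[f] * ehr[:, t:t + width])
    return out

rng = np.random.default_rng(1)
ehr_matrix = rng.integers(0, 2, size=(6, 12)).astype(float)  # 6 event types x 12 time steps
conv_filters = rng.normal(size=(3, 6, 4))                    # 3 filters, each 6 x 4

feat = one_side_conv(ehr_matrix, conv_filters)   # convolution layer
pooled = feat.max(axis=1)                        # max-pooling over time
W_out = rng.normal(size=(2, 3))                  # 2 phenotype classes (hypothetical)
probs = softmax(W_out @ pooled)                  # softmax prediction layer
print(probs)
```

Max-pooling over the time axis is what makes the output length-independent: however long the patient record, each filter contributes a single pooled feature to the softmax layer.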
Another researcher [18] showed that a flexible Heart Transplantation Survival Algorithm (IHTSA), developed using deep learning, exhibits better discrimination and accuracy than the Index for Mortality Prediction after Cardiac Transplantation (IMPACT) for predicting one-year mortality after heart transplantation. They made the model available as a web-based batch calculator that could be used as a virtual recipient-donor matching tool. This is a first step in the implementation of a deep learning architecture for transplantation data, and they concluded that it will pave the way for further improvements and an even more accurate model.
Cardiac arrest is a major burden to public health and affects patient safety, and the traditional track-and-trigger systems used to predict it early have limitations, with low sensitivity and high false-alarm rates. Another researcher [19] therefore proposed a deep-learning-based early warning system that shows higher performance than the existing track-and-trigger systems. They concluded that an algorithm based on deep learning had high sensitivity and a low false-alarm rate for the detection of cardiac arrest in their multicenter study, and found that it is consequently easy to apply in various hospital environments and offers potentially greater accuracy when additional information is used.
Another researcher [20] developed an enhanced deep neural network (DNN) to increase the accuracy and reliability of heart disease diagnosis and prognosis. The DNN learning model was based on a deeper multilayer perceptron architecture with regularization and dropout. It includes a classification model built from training data and a prediction model for diagnosing new patients. The testing results showed that the DNN classification and prediction model achieved a diagnostic accuracy of 83.67%, sensitivity of 93.51%, specificity of 72.86%, precision of 79.12%, F-score of 0.8571, area under the ROC curve of 0.8922, Kolmogorov-Smirnov (K-S) statistic of 66.62%, diagnostic odds ratio (DOR) of 38.65, and a 95% confidence interval for the DOR of [38.65, 110.28]. Clinical diagnoses of coronary heart disease could therefore be derived reliably and accurately from the developed DNN classification and prediction models. They concluded that the models can help healthcare professionals and patients throughout the world to advance both public and global health, especially in developing countries with fewer cardiac specialists available.
They also concluded that other enhanced methods could further raise the diagnostic accuracy of the deep learning model for heart disease diagnosis worldwide, and that current advances in deep learning, including recurrent neural networks, deep convolutional neural networks, long short-term memory networks, deep belief networks based on restricted Boltzmann machines, and deep auto-encoders, may further increase the accuracy of heart disease diagnosis.
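The two ingredients named for the DNN of [20], dropout and regularization on a deeper multilayer perceptron, can be sketched as a forward pass. The layer sizes, dropout rate and weight scale below are illustrative assumptions; only the mechanism is faithful.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

def mlp_forward(x, weights, drop_rate=0.5, train=True):
    """Forward pass of a deeper multilayer perceptron with dropout.

    During training, each hidden unit is kept with probability
    (1 - drop_rate) and rescaled ("inverted dropout") so that the
    expected activation matches inference time.
    """
    h = x
    for W in weights[:-1]:
        h = relu(h @ W)
        if train:
            mask = rng.random(h.shape) > drop_rate
            h = h * mask / (1.0 - drop_rate)
    logit = h @ weights[-1]
    return 1.0 / (1.0 + np.exp(-logit))       # probability of disease

def l2_penalty(weights, lam=1e-3):
    """Regularization term added to the training loss."""
    return lam * sum(np.sum(W ** 2) for W in weights)

# Hypothetical layer sizes: 13 clinical features -> 16 -> 16 -> 1 output.
sizes = [13, 16, 16, 1]
weights = [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(sizes, sizes[1:])]
patient = rng.normal(size=13)
prob = mlp_forward(patient, weights, train=False)
print(prob[0])
```

Dropout randomly silences hidden units during training and the L2 penalty shrinks the weights; both discourage the deeper network from overfitting a modest clinical dataset.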
Deep learning has gained a central position in recent years in machine learning and pattern recognition. Deep learning, a technique with its foundation in ANN, is emerging as a powerful tool for machine learning, promising to reshape the future of AI. Another researcher [21] showed how deep learning has enabled the development of more data-driven solutions in health informatics by allowing the automatic generation of features, which reduces the amount of human intervention in this process. Until now, most applications of deep learning to health informatics have processed health data as an unstructured source. A significant amount of information is equally encoded in structured data such as EHRs, which provide a detailed picture of the patient's history, pathology, treatment, diagnosis and outcome. In fact, robust inference through deep learning combined with artificial intelligence could improve the reliability of clinical decision support systems. Deep learning algorithms have mostly been employed in applications where the datasets were balanced or, as a work-around, where synthetic data was added to achieve balance. The latter solution raises a further issue regarding the reliability of the fabricated biological data samples, so methodological aspects of NNs need to be revisited in this regard. Another concern is that deep learning depends predominantly on large amounts of training data. Such requirements sharpen the classical entry barriers of machine learning, i.e., data availability and privacy. With reference to the issue of computational power, they envisage that in the years to come further ad hoc hardware platforms for neural network and deep learning processing will be announced and made commercially available.
Therefore, they conclude that deep learning has provided a positive revival of NNs and connectionism through the genuine integration of the latest advances in parallel processing enabled by coprocessors. Nevertheless, a sustained concentration of health informatics research exclusively around deep learning could slow the development of new machine learning algorithms that make more conscious use of computational resources and offer better interpretability. Cardiovascular disease, including congestive heart failure, is one of the leading causes of death all over the world, accounting for millions of deaths every year. Since this is an alarming situation, something needs to be done to check the spread of the disease and boost quality of life. Another researcher [22] proposed and developed a system to help doctors evaluate the medical condition of a patient, detect whether the patient is prone to heart failure, and predict the type of heart failure and its severity. Although traditional machine learning methods have been applied previously, this is an important effort toward earlier revelation of the disease, which helps reduce the number of deaths. For the detection of heart failure they used a boosted decision tree together with a CNN module, which estimates whether a patient is prone to heart failure. An SVM is used to detect the type of heart failure, and an artificial neural network to measure its severity; these achieved accuracies of 84% and 88.30%, respectively. The main focus of the study was the accurate detection of CHF, with a major emphasis on the prevention of the disease.
As cardiovascular imaging technologies continue to increase in their capacity to capture and store large quantities of data, computational methods developed in the field of ML provide new approaches to leveraging the growing volume of imaging data available for analysis. Machine learning approaches already form the core of cardiovascular image acquisition and processing algorithms in routine use, and with the rapid evolution of ML, advances are being made both in tools for optimizing the performance of cardiovascular imaging measurements and in how the results of these measurements are interpreted. Another researcher [23] finds that ML has been used in two broad and highly interconnected areas: (i) automation of tasks that would otherwise be performed by a human, and (ii) generation of clinically important new knowledge. Currently available ML methods, especially those based on deep learning, have generated growing interest in their potential to derive new insights from image-related data, as well as from the images themselves, given the expanding size of existing databases. Increasingly large databases, however, will require increasing resources to create the high-quality labels needed for effective analyses, so continued progress will depend on a commitment to thoughtfully and strategically investing in such resources. Notwithstanding ongoing technical and logistical challenges, machine learning and particularly deep learning methods are very likely to substantially impact the future practice and science of cardiovascular imaging.
Heart disease is one of the major complexities of life and can ultimately lead to death if not diagnosed and treated properly. In many developing countries, the non-availability of efficient diagnostic tools and the shortage of medical professionals hinder the proper prediction and treatment of patients. Although a large proportion of heart diseases are preventable, their incidence rises because of inadequate preventive measures. In today's digital world, several clinical decision support systems for heart disease prediction have been developed by different scholars to simplify and ensure efficient diagnosis. Another researcher [24] investigated the various clinical decision support systems for the prediction of heart disease proposed using data mining and machine learning techniques. Classification algorithms such as Naïve Bayes (NB), Decision Tree (DT), and Artificial Neural Network (ANN) were used for prediction, with varying accuracies. They concluded that only marginal success has been achieved in the creation of such predictive models for heart disease patients, and that more complex models incorporating multiple geographically diverse data sources are needed to increase the accuracy of predicting the early onset of the disease.
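Of the classifiers surveyed in [24], Naïve Bayes is the simplest to sketch. Below is a minimal Gaussian Naive Bayes, which assumes each feature is normally distributed and independent within a class; the feature values standing in for clinical measurements are entirely hypothetical.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes classifier, one of the algorithms (NB)
    surveyed for heart-disease prediction."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mean = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # Maximise log P(c) + sum_j log N(x_j | mean_cj, var_cj) over classes c.
        log_lik = -0.5 * (np.log(2 * np.pi * self.var)[None]
                          + (X[:, None, :] - self.mean[None]) ** 2 / self.var[None]).sum(axis=2)
        return self.classes[np.argmax(np.log(self.prior) + log_lik, axis=1)]

# Toy data standing in for two clinical features (hypothetical values).
X = np.array([[120, 180], [130, 190], [125, 185],   # class 0: lower readings
              [160, 260], [170, 270], [165, 265]],  # class 1: higher readings
             dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
model = GaussianNB().fit(X, y)
preds = model.predict(np.array([[128.0, 188.0], [168.0, 268.0]]))
print(preds)  # → [0 1]
```

The independence assumption is what keeps the model trainable from tiny datasets, which is also why NB tends to reach only the "marginal success" the survey reports on richer, correlated clinical data.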
Machine learning involves AI and is used to solve many problems in data science. One common application is the prediction of an outcome from existing data: the machine learns patterns from an existing dataset and then applies them to an unknown dataset in order to predict the outcome. Classification is a powerful machine learning technique commonly used for prediction; some classification algorithms predict with satisfactory accuracy, whereas others exhibit only limited accuracy. Another researcher [25] investigated ensemble classification, which improves the accuracy of weak algorithms by combining multiple classifiers. Experiments were performed using a heart disease dataset, and a comparative analysis was conducted to determine how the ensemble technique can improve prediction accuracy for heart disease. The emphasis was on increasing the accuracy of weak classification algorithms and on applying them to a medical dataset to show their utility for predicting disease at an early stage. They concluded that ensemble techniques such as bagging and boosting are effective in improving the prediction accuracy of weak classifiers and perform satisfactorily in identifying the risk of heart disease; a maximum increase of 7% in accuracy was achieved for weak classifiers with the help of ensemble classification.
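Bagging, one of the ensemble techniques named above, can be sketched by training weak decision stumps on bootstrap resamples of the data and combining them by majority vote. The two-feature synthetic data below is purely illustrative of a risk-prediction task.

```python
import numpy as np

def stump_fit(X, y):
    """Fit a weak one-feature threshold classifier (decision stump)."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(sign * (X[:, j] - t) > 0, 1, 0)
                acc = (pred == y).mean()
                if best is None or acc > best[0]:
                    best = (acc, j, t, sign)
    return best[1:]

def stump_predict(model, X):
    j, t, sign = model
    return np.where(sign * (X[:, j] - t) > 0, 1, 0)

def bagging_fit(X, y, n_estimators=15, seed=0):
    """Bagging: train each weak learner on a bootstrap resample."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), size=len(X))   # sample with replacement
        models.append(stump_fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    votes = np.mean([stump_predict(m, X) for m in models], axis=0)
    return (votes >= 0.5).astype(int)                # majority vote

# Hypothetical two-feature data (e.g. age, cholesterol) with a binary risk label.
rng = np.random.default_rng(3)
X = rng.normal(size=(80, 2))
y = ((X[:, 0] + X[:, 1]) > 0).astype(int)
ensemble = bagging_fit(X, y)
ensemble_acc = (bagging_predict(ensemble, X) == y).mean()
print(ensemble_acc)
```

Each stump alone is a weak classifier; the bootstrap resampling decorrelates their errors, so the majority vote is typically more accurate than any single stump, which is the mechanism behind the accuracy gains the study reports.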

Conclusion
The present work analyzes both machine learning techniques. The Artificial Neural Network shows good performance for the prediction of heart disease, but has its own limitations. Deep Learning, on the other hand, although a more recent approach, shows greater accuracy and better results in comparison with the ANN. It is therefore concluded that Deep Learning has emerged as a more precise tool for prediction and analysis. Owing to its better performance, it is increasingly accepted by medical professionals for the care of patients with heart disease and can serve as the basis for future research.