Data are usually described by a large number of features, many of which may be irrelevant or redundant for the intended data mining application. The presence of such features in a dataset degrades the performance of machine learning algorithms and increases computational complexity. Reducing the dimensionality of a dataset is therefore a fundamental task in data mining and machine learning applications. The main objective of this study is to combine a node-centrality criterion with the differential evolution algorithm to increase the accuracy of feature selection. The performance of the proposed method was compared with the most recent and well-known feature selection methods using several criteria, including classification accuracy, number of selected features, and running time. The comparison results were presented in figures and tables and analyzed in detail, and the methods were also compared statistically using tests such as the Friedman test. The results showed that, instead of finding all elements of the cluster centers present in the dataset, the selected differential evolution algorithm for clustering finds only a limited number of DCT coefficients of these centers and then reconstructs the cluster centers from those coefficients.
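The idea of representing a cluster center by a few DCT coefficients and then reconstructing it can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes a 1-D orthonormal DCT-II (via `scipy.fft`), and the function names `compress_center` and `reconstruct_center` are hypothetical helpers introduced here for illustration only.

```python
import numpy as np
from scipy.fft import dct, idct

def compress_center(center, k):
    """Keep only the first k DCT-II coefficients of a cluster center.

    For smooth centers, most of the energy concentrates in the
    low-frequency coefficients, so a small k often suffices.
    """
    return dct(center, norm='ortho')[:k]

def reconstruct_center(coeffs, d):
    """Rebuild a length-d center from its truncated DCT coefficients
    by zero-padding the missing high frequencies and inverting."""
    full = np.zeros(d)
    full[:len(coeffs)] = coeffs
    return idct(full, norm='ortho')

# Illustrative example: a smooth 32-dimensional "center" with mild noise.
rng = np.random.default_rng(0)
center = np.sin(np.linspace(0, np.pi, 32)) + 0.05 * rng.standard_normal(32)

coeffs = compress_center(center, k=8)          # search space: 8 values, not 32
approx = reconstruct_center(coeffs, d=32)      # full-length center recovered
rel_err = np.linalg.norm(center - approx) / np.linalg.norm(center)
```

In an evolutionary setting, this shrinks each candidate solution from `d` values per center to `k` coefficients per center, reducing the dimensionality of the search space the differential evolution algorithm must explore.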