Feature selection (FS) is an important process in machine learning when complex, large datasets are involved. By removing irrelevant or redundant features, FS reduces the size of datasets and the evaluation time of algorithms, and it also improves the performance of classification algorithms. The main purpose of FS is to obtain a minimal feature subset from the initial features of a given problem dataset such that this subset still represents the original dataset with acceptable performance. In this study, we generate candidate subsets using the simultaneous perturbation stochastic approximation (SPSA), migrating birds optimization (MBO), and simulated annealing (SA) algorithms. The subsets generated by these algorithms are evaluated using correlation-based FS, and the performance of the algorithms is measured using a decision tree (C4.5) as the classifier. To our knowledge, this is the first time the SPSA algorithm has been applied to the FS problem as a filter approach. We present computational experiments conducted on 15 datasets taken from the UCI machine learning repository. Our results show that the SPSA algorithm outperforms the other algorithms in terms of accuracy. Moreover, all three algorithms reduce the number of features by more than 50%.
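The abstract itself gives no pseudocode, but the core idea of using SPSA as a filter-style subset search can be sketched as follows: continuous feature-inclusion weights are perturbed simultaneously with random ±1 directions, the perturbed weights are thresholded into candidate subsets, and the two-sided merit difference drives a gradient-ascent update. This is a minimal illustrative sketch, not the authors' implementation; the gain constants, the 0.5 threshold, and the `toy_merit` objective (standing in for the paper's correlation-based merit) are all assumptions.

```python
import numpy as np

def spsa_feature_selection(merit, d, iters=100, a=0.1, c=0.1, seed=0):
    """Sketch of SPSA over continuous inclusion weights in [0, 1].

    merit(mask) -> float score to maximize (in the paper this role is
    played by a correlation-based FS merit); d is the feature count.
    """
    rng = np.random.default_rng(seed)
    w = np.full(d, 0.5)                      # start undecided on every feature
    best_mask, best_score = None, -np.inf
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                  # standard SPSA gain decays
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=d)   # Bernoulli +/-1 directions
        y_plus = merit(np.clip(w + ck * delta, 0, 1) > 0.5)
        y_minus = merit(np.clip(w - ck * delta, 0, 1) > 0.5)
        ghat = (y_plus - y_minus) / (2 * ck) * (1.0 / delta)  # gradient estimate
        w = np.clip(w + ak * ghat, 0, 1)     # ascent step (maximizing merit)
        mask = w > 0.5                       # threshold weights into a subset
        score = merit(mask)
        if score > best_score:
            best_mask, best_score = mask.copy(), score
    return best_mask, best_score

# Hypothetical toy merit: features 0 and 1 are useful, the rest add noise.
def toy_merit(mask):
    return float(mask[0]) + float(mask[1]) - 0.1 * float(mask[2:].sum())

best_mask, best_score = spsa_feature_selection(toy_merit, d=5)
```

In a filter setting like the paper's, `merit` would score a subset purely from the data (e.g., feature-class vs. feature-feature correlations) without training the C4.5 classifier, which is only used afterwards to measure the quality of the selected subsets.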