The use of Machine Learning models for decision support has increased the importance of the reliability of these models. Especially in cases where data sharing is undesirable due to confidentiality, an imbalance in the number of samples from different demographic groups, such as age, gender, and race, within the classes can cause bias in models trained on these data. In our study, it is shown that racial bias caused by an imbalanced distribution of racial information across the classes can be reduced with the Federated Learning approach, which allows different data sources to contribute to model training without the need for data sharing. In addition, the results obtained with Federated Learning were compared with those of Central Learning to examine whether reliable models can be obtained without data sharing. To compare the models, performance metrics such as Accuracy and F1 score were used, along with bias metrics such as Disparate Impact and Statistical Parity, which measure model fairness. The results show that local bias decreased with Federated Learning. For the state of NH, where the greatest improvement was observed, Disparate Impact decreased from 6.42 to 1.66 and Statistical Parity improved from -0.33 to -0.12 with Federated Learning.
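The fairness metrics named above follow their standard definitions: Disparate Impact is the ratio of positive-prediction rates between the unprivileged and privileged groups (ideal value 1), and Statistical Parity is the difference between those rates (ideal value 0). A minimal sketch of these two metrics, assuming binary predictions and a binary protected attribute where 1 marks the unprivileged group (the group coding and function names here are illustrative, not taken from the paper):

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """DI = P(y_pred = 1 | unprivileged) / P(y_pred = 1 | privileged).

    A value of 1 indicates equal positive-prediction rates; values far
    from 1 (e.g. the 6.42 reported for NH) indicate disparity.
    """
    rate_unpriv = y_pred[protected == 1].mean()
    rate_priv = y_pred[protected == 0].mean()
    return rate_unpriv / rate_priv

def statistical_parity(y_pred, protected):
    """SP = P(y_pred = 1 | unprivileged) - P(y_pred = 1 | privileged).

    A value of 0 indicates parity; negative values mean the unprivileged
    group receives fewer positive predictions.
    """
    return y_pred[protected == 1].mean() - y_pred[protected == 0].mean()

# Toy example: first four samples are the privileged group (protected = 0),
# last four are the unprivileged group (protected = 1).
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
protected = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(disparate_impact(y_pred, protected))    # 0.25 / 0.75 ≈ 0.333
print(statistical_parity(y_pred, protected))  # 0.25 - 0.75 = -0.5
```

In a federated setting these metrics would be computed per client (e.g. per state) on local predictions, so no raw data needs to leave its source.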