Regression testing of a large software system is costly and time-consuming. Test case prioritization techniques are used to reduce the time and cost of regression testing. In the literature, data such as software quality metrics and past test run results are used to prioritize test cases. In this study, a new test prioritization method is proposed based on the number of detected errors and the speed of error detection. The proposed technique aims to reveal errors in the system as early as possible. In our method, historical test data are analyzed to prioritize test cases; the extent to which the analysis emphasizes the recent or the distant past is controlled by a tunable parameter. For each test case, a priority is computed from the number of errors it has detected and how quickly it detected them. We evaluate the method by applying it to the regression tests of a developer performance portal application. The experimental results show that the proposed method is valid and promising: our approach improves the error detection rate of test sets, and the improvement holds even in complex scenarios. With a higher error detection rate, the cost and effort spent on regression testing are also reduced.
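The prioritization idea described above can be sketched roughly as follows. This is an illustrative sketch, not the paper's exact formulation: the scoring formula, the parameter name `alpha`, and the `RunRecord` fields are assumptions introduced here to show how a recency-weighted history of error counts and detection speed could yield a test ordering.

```python
# Illustrative sketch (assumed formula, not the paper's exact definition):
# score each test case by combining (a) how many errors it detected in past
# runs and (b) how quickly it detected them, with a recency parameter
# alpha in (0, 1]. Values near 1 emphasize the recent past; smaller values
# spread weight toward older runs.

from dataclasses import dataclass

@dataclass
class RunRecord:
    errors_found: int      # errors the test detected in this run
    detection_time: float  # time until its first error (or full runtime)

def priority(history: list[RunRecord], alpha: float = 0.8) -> float:
    """Recency-weighted score; higher means run the test earlier.

    `history` is ordered oldest-to-newest.
    """
    score = 0.0
    for age, run in enumerate(reversed(history)):  # age 0 = most recent run
        weight = alpha * (1.0 - alpha) ** age      # geometric recency decay
        # More errors raise the score; slower detection lowers it.
        score += weight * run.errors_found / (1.0 + run.detection_time)
    return score

def prioritize(suites: dict[str, list[RunRecord]],
               alpha: float = 0.8) -> list[str]:
    """Order test names by descending priority score."""
    return sorted(suites, key=lambda name: priority(suites[name], alpha),
                  reverse=True)
```

For example, a test that recently found one error quickly would outrank a test whose two errors were found only in older runs, because the geometric weights discount older history.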