Outliers in geodetic networks adversely affect all parameters estimated by least squares and their variances. Outlier tests (e.g., Baarda's and Pope's tests) are frequently used to detect outliers in geodetic networks. To measure the ability of these tests, the mean success rate (MSR) is proposed. Studies have shown that the MSRs of these tests in geodetic networks are low, owing to the smearing effect of least-squares estimation, even when the data set contains only one outlier. In this paper, a new approach is presented to increase the MSRs of outlier tests in geodetic networks when the outliers are small. The main idea is that if the weight of one observation is increased, the corresponding studentized or normalized residual increases as well; this claim is proved. Hence, the ability of the tests to detect outliers can be improved by appropriately increasing the weight of one observation at a time and repeating this for all observations. The approach is applied to three simulated geodetic networks. We show that the MSRs of the outlier tests improve by approximately 5% when there is one small outlier in the data set. For more than one outlier, however, the improvements in the MSRs are small.
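The core mechanism — that raising the weight of an observation raises its normalized residual — can be illustrated on a minimal toy adjustment. The sketch below is not the paper's simulated networks: it estimates a single parameter (a weighted mean, as in a trivial leveling network) with hypothetical numbers, computes Baarda-style normalized residuals w_i = v_i / (σ₀ √(q_vv,ii)), and shows that re-weighting the suspect observation enlarges its normalized residual.

```python
import numpy as np

def normalized_residuals(y, weights, sigma0=1.0):
    """Baarda-style normalized residuals for a weighted least-squares
    adjustment of a single parameter (the weighted mean of y).

    w_i = v_i / (sigma0 * sqrt(q_vv_ii)), where v are the residuals and
    Q_vv = Q_ll - A Qx A^T is the cofactor matrix of the residuals.
    """
    y = np.asarray(y, dtype=float)
    p = np.asarray(weights, dtype=float)
    P = np.diag(p)                       # weight matrix
    A = np.ones((len(y), 1))             # design matrix: every obs measures the one parameter
    Qx = np.linalg.inv(A.T @ P @ A)      # cofactor matrix of the estimate
    x_hat = Qx @ A.T @ P @ y             # weighted least-squares estimate
    v = y - (A @ x_hat).ravel()          # least-squares residuals
    Qv = np.diag(1.0 / p) - A @ Qx @ A.T # cofactor matrix of the residuals
    return v / (sigma0 * np.sqrt(np.diag(Qv)))

# Illustrative data: the third observation carries a small outlier.
y = [0.0, 0.0, 1.0]
w_base = normalized_residuals(y, [1.0, 1.0, 1.0])   # equal a priori weights
w_up   = normalized_residuals(y, [1.0, 1.0, 10.0])  # suspect obs up-weighted
print(abs(w_base[2]), abs(w_up[2]))                 # the second value is larger
```

In this toy case the normalized residual of the contaminated observation grows monotonically with its weight (it tends to √2 as the weight tends to infinity), which is the effect the proposed approach exploits one observation at a time.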