In outlier detection using robust estimation methods, a linear model whose random errors are independent and identically distributed (IID) is called homoscedastic. If the variances of the random errors vary as a function of some parameter, the model is called heteroscedastic. In practice, the random errors of a linear model can be a mixture of homoscedastic and heteroscedastic errors; such a model is called heterogeneous. How does the effectiveness of robust methods change in a contaminated heterogeneous linear model when the outliers are small? This question is investigated by simulation, applying two robust methods to a contaminated heterogeneous linear regression model that combines homoscedastic and heteroscedastic observations. One simple regression and two different multiple regressions are employed. When the stochastic model is true, the effectiveness of the two robust methods in the contaminated heterogeneous linear regression depends on the strength of the heteroscedasticity, the number of outliers, the number of unknowns, and the number of heteroscedastic observations. Moreover, the robust methods respond differently to outliers in some cases. When the stochastic model is not true, the effectiveness of the robust methods decreases significantly.
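A contaminated heterogeneous simple regression of the kind described above can be sketched as follows. The variance model, the contamination scheme, and the choice of a Huber M-estimator (fitted by iteratively reweighted least squares) are illustrative assumptions for this sketch, not the exact design or robust methods used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- simulate a heterogeneous simple linear regression ---
n = 100
x = np.linspace(1.0, 10.0, n)
X = np.column_stack([np.ones(n), x])          # intercept + one regressor
beta_true = np.array([2.0, 0.5])

sigma = 0.2
var = np.full(n, sigma**2)                    # first half: homoscedastic errors
var[n // 2:] = sigma**2 * x[n // 2:]          # second half: variance grows with x
e = rng.normal(0.0, np.sqrt(var))

y = X @ beta_true + e
y[[10, 40, 70]] += 1.0                        # contaminate with a few small outliers

def huber_irls(X, y, k=1.345, tol=1e-8, max_iter=50):
    """Huber M-estimator via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # start from OLS
    for _ in range(max_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        u = r / max(s, 1e-12)
        w = np.where(np.abs(u) <= k, 1.0, k / np.abs(u))  # Huber weights
        XtW = X.T * w                                     # X^T diag(w)
        beta_new = np.linalg.solve(XtW @ X, XtW @ y)      # weighted LS step
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_huber = huber_irls(X, y)
```

Comparing `beta_ols` and `beta_huber` against `beta_true` over many replications, and varying the fraction of heteroscedastic observations, the outlier magnitude, and the number of unknowns, reproduces the kind of sensitivity analysis the study performs.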