In the field of Machine Learning, almost all methodologies for measuring the importance of the features in a model share a common step: estimating the output of the model in several situations where only some information sources (features) are available. To assess the contribution of each feature, these techniques must evaluate the predictive ability of some features in the absence of others, which requires assigning values to the unknown features. We can distinguish two ways of performing this assignment. The first ignores the information carried by the known features and assigns a value at random. The second assumes that the feasible values for the unknown features are those observed in the sample (in the known part of the database) among the records that agree with the known features, taking the values of the known features as correct. Despite its interest, this second approach suffers from a serious overfitting problem whenever a continuous feature is present: a value of a continuous feature is unlikely to reappear in any other record, so there will be an insignificant number of matching records for each possible value and the randomization is largely lost. In this scenario, it is probably unrealistic to expect a reliable estimation. In this paper we therefore propose a new methodology based on fuzzy measures which analyzes and exploits the information available in the known features while avoiding the overfitting problem in the presence of continuous features.
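The contrast between the two assignment strategies, and the failure of exact-match conditioning for continuous features, can be sketched on a toy dataset. The code below is only an illustration under assumed data (a continuous feature `x` and a discrete feature `c`), not the methodology proposed in the paper:

```python
import random

random.seed(0)

# Toy sample: each record has a continuous feature "x" and a discrete feature "c".
sample = [{"x": random.gauss(0.0, 1.0), "c": random.choice("AB")}
          for _ in range(1000)]

def impute_random(feature):
    """First strategy: ignore the known features entirely and draw the
    unknown feature's value at random from the whole sample."""
    return random.choice(sample)[feature]

def candidates_conditional(feature, known):
    """Second strategy: restrict to records whose known features match
    exactly, and return the unknown feature's observed values there."""
    matches = [r for r in sample
               if all(r[k] == v for k, v in known.items())]
    return [r[feature] for r in matches]

# Conditioning on a discrete known feature leaves many candidate records:
print(len(candidates_conditional("x", {"c": "A"})))   # several hundred

# Conditioning on a continuous known value almost never matches any other
# record, so the "feasible values" set collapses to (near) nothing:
x0 = sample[0]["x"]
print(len(candidates_conditional("c", {"x": x0})))    # essentially just record 0
```

With a discrete known feature the conditional pool stays large, but fixing a continuous value leaves (almost surely) only the record it came from, which is exactly the loss of randomization described above.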