Agent-based modeling is an effective way of understanding and analyzing complex adaptive phenomena. In this respect, discovering the relationship between the inputs and outputs of an agent-based model is a key means of gaining insight into the dynamics of the system being modeled, and many approaches to clarifying these relationships, including sampling and metamodeling, have been proposed in the literature. After discussing the weaknesses and disadvantages of current methods, we present a metamodel-guided sequential sampling technique that combines random forests with uncertainty sampling. Experimental results on two well-known agent-based models show that the presented technique yields metamodels of higher accuracy than metamodels trained on randomly selected input-output data. In contrast to previous studies, which emphasize only the improvement in metamodel accuracy, we also examine the input parameter combinations selected by the sequential sampling technique, and we observe that sequential sampling is able to capture the boundaries of tipping-point behaviors as well as points exhibiting counter-intuitive behavior, thus potentially aiding the verification, validation, and understanding of agent-based models. Additionally, we propose a novel two-step method for categorizing agent-based model outputs prior to metamodel training, which helps the analyst decide whether to preserve the numerical model outputs or to continue the metamodel training procedure with qualitative, categorical agent-based model outputs.
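The core loop of such a metamodel-guided sequential sampling scheme can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scikit-learn random forest, the synthetic `abm_output` function standing in for an actual agent-based model run, and all sample sizes and parameter values are assumptions chosen for the sake of a runnable example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for an agent-based model: a categorical output
# that depends nonlinearly on two input parameters (illustrative only).
def abm_output(X):
    return (np.sin(3 * X[:, 0]) + X[:, 1] > 1.0).astype(int)

# Candidate pool of unlabeled input parameter combinations.
pool = rng.uniform(0.0, 2.0, size=(500, 2))

# Seed training set: a small random sample, simulated once.
idx = rng.choice(len(pool), size=20, replace=False)
X_train = pool[idx]
y_train = abm_output(X_train)
pool = np.delete(pool, idx, axis=0)

# Sequential sampling: retrain the metamodel, score the pool, and
# simulate the model only where the metamodel is least confident.
for _ in range(5):
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X_train, y_train)
    proba = rf.predict_proba(pool)
    # Least-confidence uncertainty sampling: lowest max class probability.
    uncertain = np.argsort(proba.max(axis=1))[:10]
    X_new = pool[uncertain]
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, abm_output(X_new)])
    pool = np.delete(pool, uncertain, axis=0)
```

Because the least-confident points cluster near class boundaries, the selected input combinations tend to trace the regions where the model output switches, which is consistent with the tipping-point boundaries discussed above.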