Smart environments with ubiquitous computers are the next generation of information technology, and they demand improved human-computer interfaces: the computer of the future must be aware of the people in its environment, knowing their identities and understanding their moods. Despite the great effort made in the past decades, developing a system capable of automatic facial emotion recognition remains difficult. In this paper, we challenge the benchmark algorithm for emotion classification on the Extended Cohn-Kanade (CK+) database and present a facial component-based system for emotion classification that beats the given benchmark performance: starting from a 2D emotional face image, we search for highly discriminative areas, classify them independently, and fuse all results to recognize the expressed emotion. A sparse-representation-based classifier automatically selects the two most successful blocks and achieves the best results, beating the given benchmark performance by six percentage points. Finally, using the most promising algorithms for facial analysis, we build equivalent facial component-based systems and provide a fair comparison among them.
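The pipeline described above (classify facial blocks independently, then fuse) can be illustrated with a minimal sketch. This is not the paper's implementation: it uses synthetic data, a least-squares nearest-subspace variant in place of a full L1 sparse-representation solver, and fuses blocks by summing per-class reconstruction residuals; all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def class_residuals(D, labels, y):
    # For each class, reconstruct the test vector y from that class's
    # training columns and record the reconstruction error (residual).
    res = {}
    for c in np.unique(labels):
        Dc = D[:, labels == c]
        coef, *_ = np.linalg.lstsq(Dc, y, rcond=None)
        res[int(c)] = float(np.linalg.norm(y - Dc @ coef))
    return res

# Two synthetic "facial blocks" (stand-ins for, e.g., eye and mouth
# regions), 3 emotion classes, 10 training samples per class.
labels = np.repeat([0, 1, 2], 10)
blocks_train = [rng.normal(size=(50, 30)) + labels for _ in range(2)]

# A test face drawn near class 2 in both blocks.
blocks_test = [B[:, labels == 2].mean(axis=1) + 0.1 * rng.normal(size=50)
               for B in blocks_train]

# Fusion: sum per-class residuals over all blocks, predict the minimum.
total = {}
for D, y in zip(blocks_train, blocks_test):
    for c, r in class_residuals(D, labels, y).items():
        total[c] = total.get(c, 0.0) + r
pred = min(total, key=total.get)
```

In a genuine sparse-representation classifier the coefficients would be obtained by L1-regularized minimization over the whole training dictionary rather than per-class least squares, but the decision rule (smallest class-wise residual, fused across blocks) has the same shape.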