Fourier-, wavelet-, and Hilbert-based time-frequency techniques have attracted considerable interest in classification studies for emotion recognition within human-computer interface research. Empirical mode decomposition (EMD), a Hilbert-based time-frequency technique, was developed as a tool for adaptive signal processing; its multivariate extension captures the common oscillatory structure of a multi-channel signal by exploiting shared notions of instantaneous frequency and bandwidth. Electroencephalographic (EEG) signals, in turn, are widely preferred for studying emotion recognition in human-machine interaction. This study presents an emotion recognition framework based on EEG signal decomposition using multivariate empirical mode decomposition (MEMD). For emotion recognition, the SJTU emotion EEG dataset (SEED) is classified using deep learning methods. Convolutional neural networks (AlexNet, DenseNet-201, ResNet-101, and ResNet-50) and AutoKeras architectures are selected for image classification. The proposed framework reaches 99% and 100% classification accuracy with the transfer learning methods and the AutoKeras method, respectively.