IEEE Access, 2026 (SCI-Expanded, Scopus)
Explainable artificial intelligence (XAI) aims to address the black-box problem in high-stakes applications. However, transparency alone does not guarantee trust. This review examines a critical paradox in XAI research: while explanation methods can generate insights, three main challenges limit their effectiveness. First, explanation methods create new attack surfaces; adversarial manipulations can exploit explanations with over 90% success while preserving model accuracy. Second, evaluation practices remain primarily computational: only 26% of user studies follow human-centered protocols, and fewer than 23% involve domain experts. Third, regulatory requirements, such as the GDPR right to explanation, lack clear technical implementations, complicating compliance. We analyzed the literature across finance, healthcare, and cybersecurity and found that current research emphasizes algorithmic innovation over practical deployment. Moving toward reliable AI requires shifting from standalone explanation methods (XAI 1.0) to systems that are aligned with human understanding, resistant to adversarial attacks, and compliant with legal requirements (XAI 2.0). This review provides guidance on the key technical advances, evaluation strategies, and regulatory clarifications necessary for deploying trustworthy AI.