A CONSISTENCY-AWARE PERSPECTIVE ON EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR FEATURE SELECTION IN SOFTWARE ENGINEERING: A CRITICAL REVIEW AND FRAMEWORK

Authors

  • Adam Khan Department of Computer Science, Sarhad University of Science and Information Technology Peshawar, Pakistan
  • Asad Ali Computer Engineering Department, Cyprus International University, Nicosia, North Cyprus
  • Muhammad Ismail Mohmand Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Istanbul Atlas University, 34408, Turkey

Abstract

Explainable Artificial Intelligence (XAI) has become essential for enhancing transparency, interpretability, and trust in Machine Learning (ML) models in Software Engineering (SE). Although model-agnostic approaches such as Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Permutation Feature Importance (PFI) are increasingly popular for interpreting predictions, their effectiveness for assessing Feature Selection (FS) remains a serious concern. Specifically, the feature importance rankings these methods produce are often unstable across changes in datasets, model configurations, and validation techniques, which limits their practical value in SE decision-making. This study presents a critical and thematic review of XAI methods for FS in SE, with particular emphasis on explanation consistency. Unlike prior studies, it methodically examines the shortcomings of current methods with respect to consistency. Based on the identified research gaps, we propose the CFXAI-SE framework (Consistent Feature eXplainable AI for Software Engineering). The framework combines dataset perturbation, multi-model analysis, and statistical consistency analysis to produce consistent and reliable feature importance rankings. The findings reveal that consistency is a largely unexplored aspect of XAI studies for SE. The proposed framework offers a systematic foundation for building reliable, interpretable, and reproducible ML systems. This study contributes to advancing dependable XAI adoption in SE applications such as defect prediction and effort estimation.
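The abstract's three ingredients (dataset perturbation, multi-model analysis, and statistical consistency analysis) can be illustrated with a minimal sketch. This is not the authors' CFXAI-SE implementation, only an assumed reading of the idea: feature importances are computed via permutation importance for several models over bootstrap-perturbed datasets, and ranking stability is then scored with mean pairwise Kendall's tau (all dataset sizes, model choices, and the tau-based consistency score are illustrative assumptions).

```python
# Hypothetical sketch of consistency-aware feature-importance analysis
# (NOT the paper's CFXAI-SE code): bootstrap perturbation + two models +
# Kendall's tau agreement across the resulting importance rankings.
import numpy as np
from scipy.stats import kendalltau
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Synthetic stand-in for an SE dataset (e.g., defect-prediction metrics).
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

models = [RandomForestClassifier(n_estimators=50, random_state=0),
          LogisticRegression(max_iter=1000)]

importances = []
for model in models:                 # multi-model analysis
    for _ in range(5):               # dataset perturbation (bootstrap)
        idx = rng.integers(0, len(X), len(X))
        model.fit(X[idx], y[idx])
        result = permutation_importance(model, X, y,
                                        n_repeats=5, random_state=0)
        importances.append(result.importances_mean)

# Statistical consistency analysis: Kendall's tau is rank-based, so it
# directly measures agreement between two importance orderings.
taus = [kendalltau(importances[i], importances[j])[0]
        for i in range(len(importances))
        for j in range(i + 1, len(importances))]
consistency = float(np.mean(taus))
print(f"mean pairwise Kendall tau: {consistency:.2f}")
```

A score near 1 would indicate that the importance ranking is stable across perturbations and models; values near 0 would flag the instability the abstract warns about.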

Published

2026-03-18

How to Cite

Khan, A., Ali, A., & Mohmand, M. I. (2026). A consistency-aware perspective on explainable artificial intelligence for feature selection in software engineering: A critical review and framework. Spectrum of Engineering Sciences, 4(3), 1033–1040. Retrieved from https://www.thesesjournal.com/index.php/1/article/view/2287