COMPARATIVE ANALYSIS OF EXPLAINABLE AI TECHNIQUES FOR ENHANCED DECISION SUPPORT SYSTEMS
Abstract
The rapid integration of artificial intelligence (AI) into decision support systems (DSS) has raised concerns about the transparency and interpretability of complex machine learning models. To improve the interpretability and reliability of AI-driven decision-making, this paper evaluates popular explainable artificial intelligence (XAI) techniques, including LIME, SHAP, feature importance measures, and rule-based methods. Experiments on benchmark datasets compare these methods with respect to explanation accuracy, consistency, computational efficiency, and user interpretability. The results indicate that combining several XAI techniques can substantially enhance decision support systems by increasing transparency, user confidence, and decision quality. SHAP-based methods provide more consistent and globally interpretable explanations, whereas LIME offers flexible and efficient local explanations. These improvements enable more informed and accurate decisions in critical domains such as healthcare and finance. This research contributes a systematic comparison methodology and practical guidance for selecting appropriate XAI techniques, thereby supporting the development of more transparent, trustworthy, and effective decision support systems.













