Explainable Artificial Intelligence Models for Transparent and Accountable Decision Support Systems
Keywords:
XAI, Explainable AI, Decision Support Systems, Accountability, Transparency, AI Ethics, Interpretable Machine Learning, Responsible AI

Synopsis
As artificial intelligence (AI) continues to be embedded into critical decision-making infrastructures, the demand for explainability has become urgent. Explainable Artificial Intelligence (XAI) addresses this need by enhancing transparency and accountability, particularly in decision support systems (DSS) operating in sensitive sectors such as healthcare, finance, and law. This paper evaluates the current state of XAI models, emphasizing their interpretability, ethical implications, and domain-specific applications. Through a review of major pre-2024 literature and a visualization of research trends, we provide insight into how XAI enables stakeholders to make informed, fair, and accountable decisions.
References
[1] Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access.
[2] Gundaboina, A. (2023). Data Loss Prevention in Healthcare: Advanced Strategies for Protecting PHI in Cloud Environments. Journal of Artificial Intelligence, Machine Learning and Data Science, 1(2), 3045–3051. https://doi.org/10.51219/JAIMLD/anjan-gundaboina/628
[3] Barredo Arrieta, A. et al. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion.
[4] Gundaboina, A. (2024). HITRUST Certification Best Practices: Streamlining Compliance for Healthcare Cloud Solutions. International Journal of Computer Science and Information Technology Research, 5(1), 76–94. https://ijcsitr.org/index.php/home/article/view/IJCSITR_2024_05_01_008
[5] Amann, J. et al. (2022). To explain or not to explain? AI explainability in clinical decision support systems. PLOS Digital Health.
[6] Uppuluri, V. (2023). Design and Deployment of Predictive Models for Influenza Breakthrough Infections Using Pharmacy Test Data. Journal of Artificial Intelligence, Machine Learning & Data Science, 1(2), 3031–3037. https://doi.org/10.51219/JAIMLD/vijitha-uppuluri/626
[7] Antoniadi, A.M. et al. (2021). XAI in machine learning-based clinical DSS: A systematic review. Applied Sciences.
[8] Ehsan, U. et al. (2021). Expanding explainability: Towards social transparency in AI systems. CHI.
[9] Potla, R.B. (2023). Supplier Collaboration Portals for Component Manufacturers: Procure-to-Pay Automation and Working-Capital Outcomes. International Journal of Artificial Intelligence (ISCSITR-IJAI), 4(1), 16–40. https://doi.org/10.63397/ISCSITR-IJAI_04_01_002
[10] Felzmann, H. et al. (2020). Towards transparency by design for AI. Science and Engineering Ethics.
[11] Minh, D. et al. (2022). Explainable artificial intelligence: A comprehensive review. Artificial Intelligence Review.
[12] London, A.J. (2019). Artificial intelligence and black-box medical decisions: Accuracy vs. explainability. Hastings Center Report.
[13] Vallemoni, R.K. (2023). Data Lineage and Metadata in Payment Ecosystems: Auditability and Regulatory Readiness across the Life Cycle. Frontiers in Computer Science and Artificial Intelligence, 2(1), 46–58. https://doi.org/10.32996/fcsai.2023.2.1.5
[14] Busuioc, M. (2021). Accountable AI: Holding algorithms to account. Public Administration Review.
[15] Vallemoni, R.K. (2023). Merchant Onboarding and Risk Scoring: Data Governance, Master Data, and Golden-Record Strategies. ISCSITR - International Journal of Scientific Research in Information Technology (ISCSITR-IJSRIT), 4(1), 16–41. https://doi.org/10.63397/ISCSITR-IJSRIT_04_01_002
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.