Integrating Knowledge Representation and Logical Inference for Explainable Artificial Intelligence in High-Stakes Decision-Making
Keywords:
Explainable AI (XAI), Knowledge Representation, Logical Inference, High-Stakes Decisions, Symbolic AI, Decision Transparency

Synopsis
High-stakes decision-making domains (e.g., healthcare, autonomous systems, legal judgement) require explainable artificial intelligence (XAI) that is both transparent and reliable. This paper explores how Knowledge Representation (KR) and Logical Inference (LI) can be integrated to support explainability in AI systems. We evaluate current methods, identify integration challenges, present experimental results on benchmark datasets, and discuss future directions. Results show that hybrid KR + LI approaches improve explanation quality and decision transparency compared with black-box models.
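To make the idea of KR + LI explainability concrete, the sketch below shows a minimal forward-chaining inference engine over Horn-style rules that records the justification for every derived conclusion. This is an illustrative assumption, not the paper's implementation: the clinical facts, rule set, and the `forward_chain` function are hypothetical examples of how symbolic inference can yield a human-readable explanation trace.

```python
# Illustrative sketch (hypothetical, not the paper's system): a tiny
# forward-chaining engine whose derivations double as explanations.

def forward_chain(facts, rules):
    """Derive new facts from rules of the form (premises, conclusion),
    recording which premises justified each derived fact."""
    known = set(facts)
    explanations = {f: "given" for f in facts}  # base facts are self-justified
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only if all premises hold and the conclusion is new.
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                explanations[conclusion] = (
                    f"derived from {', '.join(premises)}"
                )
                changed = True
    return known, explanations

# Hypothetical high-stakes example: a toy clinical triage knowledge base.
facts = {"fever", "low_bp"}
rules = [
    (("fever", "low_bp"), "possible_sepsis"),
    (("possible_sepsis",), "escalate_to_icu"),
]
derived, why = forward_chain(facts, rules)
print(why["escalate_to_icu"])  # -> derived from possible_sepsis
```

Because every conclusion carries the premises that produced it, the full chain (fever + low_bp → possible_sepsis → escalate_to_icu) can be replayed for a human decision-maker, which is the transparency property that black-box models lack.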
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.