Leveraging Explainable Machine Learning Models for Real-Time Threat Detection in Enterprise Networks

Authors

Franz Kafka Maria Rilke
Explainable AI Cybersecurity Specialist, United Kingdom.

Keywords

Explainable AI (XAI), Cybersecurity, Threat Detection, Enterprise Networks, Intrusion Detection Systems (IDS), Interpretable Machine Learning, Real-Time Analysis

Synopsis

The proliferation of advanced cyber threats demands rapid, reliable, and interpretable detection mechanisms within enterprise networks. While traditional intrusion detection systems (IDS) offer a degree of protection, they often fall short in scalability, adaptability, and transparency. With the emergence of Explainable Artificial Intelligence (XAI), modern machine learning models can be designed not only to detect threats but also to provide interpretable insight into their decision-making processes. This paper investigates the integration of explainable machine learning (ML) models into real-time threat detection systems, analyzing their performance and trustworthiness in enterprise environments. We present a comparative analysis of interpretable ML techniques, evaluating their detection accuracy and interpretability scores, and demonstrate the value of model transparency for cybersecurity teams in real-world scenarios.
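To make the idea of pairing detection with per-alert explanations concrete, the sketch below shows one way an inherently interpretable model (a logistic regression over network-flow features) can surface the signed contribution of each feature to an alert. Everything here is illustrative rather than the paper's implementation: the feature names, the synthetic data, and the explain_alert helper are assumptions, using only standard NumPy and scikit-learn APIs.

# Minimal sketch: an interpretable logistic-regression detector over
# hypothetical network-flow features, reporting each feature's signed
# contribution to the log-odds of the "malicious" class per alert.
# Feature names and training data are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

FEATURES = ["duration", "src_bytes", "dst_bytes", "failed_logins", "conn_rate"]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, len(FEATURES)))  # stand-in flow records
# Synthetic labels: flows with many failed logins and high connection
# rates are more likely to be malicious.
y = (X[:, 3] + 0.5 * X[:, 4] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_alert(flow):
    """Rank features by their signed contribution (coefficient * scaled
    value) to the log-odds of the malicious class for a single flow."""
    z = scaler.transform(flow.reshape(1, -1))[0]
    contributions = model.coef_[0] * z
    return sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1]))

# Explain the flow the model considers most suspicious.
scores = model.predict_proba(scaler.transform(X))[:, 1]
for name, c in explain_alert(X[np.argmax(scores)]):
    print(f"{name:>14}: {c:+.3f}")

Because the contributions are read directly from the linear model, the explanation is exact rather than a post-hoc approximation; analysts can see at a glance which features drove a given alert, which is the kind of transparency the synopsis argues cybersecurity teams need.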


Published

July 22, 2025