
Publication

Explainable Boosting Machines for Network Intrusion Detection with Features Reduction

Tarek Elmihoub; Lars Nolle; Frederic Theodor Stahl
In: Max Bramer; Frederic Theodor Stahl (Eds.). Artificial Intelligence XXXIX. SGAI-AI 2022. SGAI International Conference on Innovative Techniques and Applications of Artificial Intelligence (AI-2022), December 13-15, 2022, Cambridge, United Kingdom, Pages 280-294, Lecture Notes in Computer Science (LNAI), Vol. 13652, ISBN 978-3-031-21440-0, Springer, Cham, Switzerland, 12/2022.

Abstract

Explainable Artificial Intelligence (XAI) can help in building trust in Artificial Intelligence (AI) models. XAI also supports the development process of these models and enables gaining insight into problems with fundamentally incomplete specifications. Trust in AI models is crucial, especially when they are used in high-stakes domains. Network security is one such domain, where AI models have established themselves. Network intrusion attacks are amongst the most dangerous threats in the field of information security, and their detection can be viewed as a problem with incomplete specifications. Using AI models with XAI facilities, such as glass-box models, to tackle network intrusion attacks can help in acquiring more knowledge about the problem and in developing better models. In this paper, the use of the Explainable Boosting Machine (EBM) as a glass-box classifier for detecting network intrusions is investigated. The performance of EBM is compared with that of other AI classifiers. The conducted experiments show that EBM outperforms its competitors in this domain. The work also demonstrates that the explainability of EBMs can help reduce the number of features needed for detecting attacks without degrading performance.
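To illustrate the idea of explainability-driven feature reduction described in the abstract, the following is a minimal sketch, not the authors' code: it trains an EBM from the interpret library, ranks features by their global importance scores, and retrains on a reduced feature set. The function name, the dataset shape, and the number of retained features (keep=10) are hypothetical assumptions.

from interpret.glassbox import ExplainableBoostingClassifier

def fit_and_reduce(X, y, feature_names, keep=10):
    # Train a glass-box EBM on the full feature set.
    # X: NumPy array of network-flow features, y: binary labels
    # (e.g. 0 = benign, 1 = attack); feature_names: list of column names.
    ebm = ExplainableBoostingClassifier(feature_names=feature_names)
    ebm.fit(X, y)

    # The global explanation exposes an overall importance score per term.
    overall = ebm.explain_global().data()
    ranked = sorted(zip(overall["names"], overall["scores"]),
                    key=lambda t: t[1], reverse=True)

    # Keep the top single features; EBM names interaction terms "a & b".
    top = [name for name, _ in ranked if " & " not in name][:keep]
    idx = [feature_names.index(name) for name in top]

    # Retrain on the reduced feature set so its detection performance
    # can be compared against the full model, as the paper does.
    reduced = ExplainableBoostingClassifier(feature_names=top)
    reduced.fit(X[:, idx], y)
    return reduced, top

In this sketch the threshold for how many features to keep is fixed; in practice one would evaluate the reduced classifier on a held-out test set and vary the cut-off until performance starts to degrade.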