Logically explainable malware detection

Giannini, Francesco;
2024

Abstract

Malware detection is a challenging application due to the rapid evolution of attack techniques, and traditional signature-based approaches struggle with the high volume of malware samples. Machine learning approaches address this limitation but lack clear interpretability, whereas interpretable models often underperform. This paper proposes using Logic Explained Networks (LENs), a recently proposed class of interpretable neural networks that provide explanations in the form of First-Order Logic rules, for malware detection. Applied to the EMBER dataset, LENs show robustness superior to traditional interpretable methods and performance comparable to black-box models. Additionally, we introduce a tailored version of LENs that improves the fidelity of their logic-based explanations.
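To make the idea in the abstract concrete, the following is a minimal, hypothetical sketch of a LEN-style pipeline: binary "concept" features (here, invented EMBER-like predicates such as has_signature or packed), a small classifier with a learned per-concept relevance gate as a simplified stand-in for the entropy-based concept selection used by LENs, and a crude extraction of a conjunctive First-Order-Logic-style rule for the malware class. It is not the authors' implementation (the actual work relies on the LEN machinery, e.g. the torch_explain library), and all names, thresholds, and the synthetic data are illustrative assumptions.

# Minimal, self-contained sketch of a LEN-style pipeline for malware detection.
# NOT the paper's implementation: concept names, thresholds, and the rule-extraction
# heuristic are illustrative assumptions; real LENs use entropy-based layers and
# principled FOL extraction (e.g. the torch_explain library) on EMBER features.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1. Toy Boolean "concept" features standing in for binarized EMBER attributes.
concepts = ["has_signature", "high_entropy_section", "imports_crypto_api", "packed"]
n, d = 512, len(concepts)
X = torch.randint(0, 2, (n, d)).float()
# Synthetic label: malware if (high_entropy_section AND packed) OR imports_crypto_api.
y = ((X[:, 1] * X[:, 3] + X[:, 2]) > 0).float()

# 2. Small classifier with a per-concept relevance gate (simplified stand-in
#    for the entropy-based concept selection of Logic Explained Networks).
class TinyLEN(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(d))    # concept relevance logits
        self.mlp = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x):
        gate = torch.sigmoid(self.alpha)             # soft concept selection in [0, 1]
        return self.mlp(x * gate).squeeze(-1), gate

model = TinyLEN(d)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(300):
    logits, gate = model(X)
    # Cross-entropy plus a sparsity penalty pushing irrelevant concepts toward 0.
    loss = nn.functional.binary_cross_entropy_with_logits(logits, y) + 0.01 * gate.sum()
    opt.zero_grad(); loss.backward(); opt.step()

# 3. Extract a crude conjunctive explanation for the "malware" class: keep the
#    most relevant concepts and read their typical truth value among samples
#    predicted as malware.
with torch.no_grad():
    logits, gate = model(X)
    pred_malware = torch.sigmoid(logits) > 0.5
    literals = []
    for i in range(d):
        if gate[i] > 0.5:
            mean_val = X[pred_malware, i].mean().item()
            literals.append(concepts[i] if mean_val > 0.5 else f"NOT {concepts[i]}")
    print("malware <-", " AND ".join(literals) if literals else "(no relevant concepts)")

On the synthetic data above, the gate typically suppresses has_signature and keeps the concepts that actually drive the label, so the printed rule approximates the generating formula; the paper's tailored LEN targets exactly this kind of fidelity between the extracted rules and the network's decisions.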
Field IINF-05/A - Information processing systems
Field INFO-01/A - Computer science
HI-AI 2024, KDD Workshop on Human-Interpretable AI 2024
Barcelona
26 August 2024
KDD Workshop on Human-Interpretable AI 2024 : Proceedings of the KDD Workshop on Human-Interpretable AI 2024 co-located with 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2024) : Centre de Convencions Internacional de Barcelona, Spain, August 26, 2024
CEUR-WS
Explainable AI; First-Order Logic; Logic Explained Networks; Malware Detection;
Files in this item:
KDD - Logically Explainable Malware Detection.pdf
Access: open access
Type: Published version
License: Creative Commons
Size: 1.14 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/149823
Citations
  • Scopus 0