Enhancing Interpretability of Machine Learning Models over Knowledge Graphs

Authored by
Yashrajsinh Chudasama, Disha Purohit, Philipp D. Rohde, Maria Esther Vidal
Abstract

Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration showcases the potential of Semantic Web technologies for enhancing the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model's decision-making process. InterpretME documents the execution of an ML pipeline as factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning over a model's outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus enriching the contextual information of the InterpretME KG entities.
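The core idea of documenting an ML pipeline as factual statements can be illustrated with a minimal sketch: a run's hyperparameters are serialized as RDF triples in N-Triples syntax, so they become machine-readable and linkable. The namespace and property names below are hypothetical placeholders, not the actual InterpretME vocabulary.

```python
# Minimal sketch: serializing ML-run metadata as RDF triples
# (N-Triples syntax). The namespace and property names are
# hypothetical placeholders, not the InterpretME vocabulary.

EX = "http://example.org/interpretme/"  # hypothetical namespace


def run_to_ntriples(run_id: str, hyperparameters: dict) -> str:
    """Turn a run's hyperparameters into N-Triples statements."""
    subject = f"<{EX}run/{run_id}>"
    lines = []
    for key, value in hyperparameters.items():
        hp = f"<{EX}run/{run_id}/hp/{key}>"
        # Link the run to one resource per hyperparameter setting.
        lines.append(f"{subject} <{EX}hasHyperparameter> {hp} .")
        # Attach the setting's name and value as literals.
        lines.append(f'{hp} <{EX}name> "{key}" .')
        lines.append(f'{hp} <{EX}value> "{value}" .')
    return "\n".join(lines)


print(run_to_ntriples("42", {"max_depth": 5, "criterion": "gini"}))
```

Once such statements are loaded into a triple store, the documented runs can be queried with SPARQL alongside the interlinked entities from existing KGs.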

Organizational unit(s)
L3S Research Center
Institute of Data Science
External organization(s)
Technische Informationsbibliothek (TIB) – Leibniz Information Centre for Science and Technology and University Library
Type
Paper in conference proceedings
Number of pages
5
Publication date
2023
Publication status
Published
Peer-reviewed
Yes
ASJC Scopus subject areas
Computer Science (all)
Sustainable Development Goals
SDG 3 – Good Health and Well-being
Electronic version(s)
https://ceur-ws.org/Vol-3526/paper-05.pdf (Access: Open)