Knowledge Graphs For eXplainable AI


On the Integration of Semantic Technologies and Symbolic Systems into Deep Learning Models for a More Comprehensible Artificial Intelligence


Schematic representation of an eXplainable AI system that integrates semantic technologies into deep learning models. The traditional pipeline of an AI system is depicted in blue. The Knowledge Matching process, which links deep learning components with Knowledge Graphs (KGs) and ontologies, is depicted in orange. The Cross-Disciplinary and Interactive Explanations enabled by query and reasoning mechanisms are depicted in red.

Deep learning models have contributed to unprecedented results in the prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights into how a specific result was achieved. In contexts where the impact of AI on human life is significant (e.g., recruitment tools, medical diagnoses), explainability is not only a desirable property: it is, or in some cases will soon be, a legal requirement.

Most of the available approaches to implement eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts who can manipulate the mathematical functions inside deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols act as a lingua franca between humans and deep learning. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI: while less flexible and less robust to noise than deep learning models, KGs are natively designed to be explainable.

Limits of current XAI and the opportunity of KGs

XAI is the field of research where mathematicians, computer scientists, and software engineers design, develop, and test techniques for making AI systems more transparent and comprehensible to their stakeholders. Most of the approaches developed in this field require very specific technical expertise to manipulate the algorithms that implement the mathematical functions at the root of deep learning. Moreover, understanding this mathematical scaffolding is not enough to gain insight into the internal workings of a model. To be more understandable, deep-learning-based systems should be able to emit and manipulate symbols, enabling explanations to users of how a specific result is achieved.

In the context of symbolic systems, KGs and their underlying semantic technologies are a promising solution to the issue of understandability. These large networks of semantic entities and relationships provide a useful backbone for several reasoning mechanisms, ranging from consistency checking to causal inference. These reasoning procedures are enabled by ontologies, which provide a formal representation of the semantic entities and relationships relevant to a specific sphere of knowledge.
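To make ontology-enabled reasoning concrete, here is a minimal sketch in Python using the rdflib and owlrl libraries. The three-class medical hierarchy is invented purely for illustration and is not taken from any real ontology.

```python
# A minimal sketch of ontology-backed reasoning with rdflib and owlrl
# (pip install rdflib owlrl). The tiny hierarchy below is hypothetical.
import owlrl
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/onto#")
g = Graph()

# Ontology: a small class hierarchy for an invented medical domain.
g.add((EX.Pneumonia, RDFS.subClassOf, EX.LungDisease))
g.add((EX.LungDisease, RDFS.subClassOf, EX.Disease))

# A fact asserted about an individual.
g.add((EX.case42, RDF.type, EX.Pneumonia))

# Expand the graph with the entailments licensed by OWL RL semantics.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# The reasoner has inferred that case42 is also a LungDisease and a
# Disease (alongside OWL built-ins such as owl:Thing).
for cls in g.objects(EX.case42, RDF.type):
    print(cls)
```

Consistency checking and richer inference follow the same pattern: the ontology supplies the rules, and a reasoner materializes conclusions that can then be inspected or queried.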

The role of KGs for a better XAI

Implementations of symbolic systems based on semantic technologies are well suited to improving explanations for non-insiders. The input features, hidden layers and computational units, and predicted outputs of deep learning models can be mapped onto entities of KGs or concepts and relationships of ontologies (knowledge matching). Traditionally, these ontological artifacts are the result of conceptualizations and practices adopted by experts from various disciplines, such as biology, finance, and law. As a consequence, they are highly comprehensible to people with expertise in a specific domain (cross-disciplinary explanations), even if those people have no skills in AI technologies. Moreover, KGs and ontologies are natively built to be queried, so they can answer user requests (interactive explanations) and provide a symbolic level at which to interpret the behavior and the results of a deep learning model.
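The following sketch, reusing the graph g and namespace EX from the previous snippet, illustrates both ideas under stated assumptions: the label_to_entity mapping and the predicted label are hypothetical stand-ins for a real classifier's output vocabulary, and the SPARQL query plays the role of an interactive explanation.

```python
# A minimal sketch of knowledge matching and an interactive explanation,
# reusing g and EX from the previous snippet. The mapping and the
# predicted label are hypothetical stand-ins for a real model's output.

# Knowledge matching: link the model's output labels to KG entities.
label_to_entity = {"pneumonia": EX.Pneumonia}

predicted_label = "pneumonia"  # stand-in for an actual model prediction
entity = label_to_entity[predicted_label]

# Interactive explanation: a SPARQL query walks the class hierarchy to
# surface the broader concepts that contextualize the prediction.
results = g.query(
    """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX owl:  <http://www.w3.org/2002/07/owl#>
    SELECT DISTINCT ?broader WHERE {
        ?cls rdfs:subClassOf+ ?broader .
        FILTER(?broader != ?cls && ?broader != owl:Thing)
    }
    """,
    initBindings={"cls": entity},
)
for row in results:
    print(f"{predicted_label} is classified under {row.broader}")
```

A domain expert with no AI background can read the output in terms of the ontology's own concepts, which is precisely what cross-disciplinary explanations aim for.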

Starting from these points, there are specific trajectories for future work on XAI: the exploitation of symbolic techniques to design novel deep neural architectures that natively encode explanations; the development of multi-modal explanation models able to provide insights from different perspectives, combining visual and textual artifacts; and the definition of a common explanation framework, based on KGs and ontologies, for comparing deep learning models and enabling proper validation strategies.

Reference

More information on this topic is available in our journal article entitled “On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI — Three Challenges for Future Research”.

