Knowledge Graphs For eXplainable AI

On the Integration of Semantic Technologies and Symbolic Systems into Deep Learning Models for a More Comprehensible Artificial Intelligence

Schematic representation of an eXplainable AI system that integrates semantic technologies into deep learning models. The traditional pipeline of an AI system is depicted in blue. The Knowledge Matching process, which links deep learning components with Knowledge Graphs (KGs) and ontologies, is depicted in orange. Cross-Disciplinary and Interactive Explanations, enabled by query and reasoning mechanisms, are depicted in red.

Deep learning models have contributed to reaching unprecedented results in the prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights into how a specific result was achieved. In contexts where the impact of AI on human life is significant (e.g., recruitment tools, medical diagnoses, etc.), explainability is not only a desirable property: it is, or in some cases will soon be, a legal requirement.

Most of the available approaches to implement eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts who can manipulate the mathematical functions inside deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols are the elements of a lingua franca between humans and deep learning models. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI: while less flexible and less robust to noise than deep learning models, KGs are natively developed to be explainable.

Limits of current XAI and the opportunity of KGs

XAI is the field of research where mathematicians, computer scientists, and software engineers design, develop, and test techniques for making AI systems more transparent and comprehensible to their stakeholders. Most of the approaches developed in this field require very specific technical expertise to manipulate the algorithms that implement the mathematical functions at the roots of deep learning. Moreover, understanding this mathematical scaffolding is not enough to gain insight into the internal workings of a model. In fact, to be more understandable, deep-learning-based systems should be able to emit and manipulate symbols, enabling explanations to users of how a specific result is achieved.

In the context of symbolic systems, KGs and their underlying semantic technologies are a promising solution to the issue of understandability. These large networks of semantic entities and relationships provide a useful backbone for several reasoning mechanisms, ranging from consistency checking to causal inference. These reasoning procedures are enabled by ontologies, which provide a formal representation of the semantic entities and relationships relevant to a specific sphere of knowledge.
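To make one of these reasoning mechanisms concrete, here is a minimal sketch (a hypothetical example, not from the article; the class names and the toy medical domain are invented) that builds a tiny class hierarchy with the rdflib Python library and uses a SPARQL property path to infer superclass membership that was never explicitly asserted.

```python
# Minimal sketch of ontology-backed reasoning over a tiny KG (hypothetical example).
# Requires rdflib (pip install rdflib); all entity names are invented for illustration.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# A two-level ontology fragment: Melanoma is a SkinDisease, which is a Disease.
g.add((EX.Melanoma, RDFS.subClassOf, EX.SkinDisease))
g.add((EX.SkinDisease, RDFS.subClassOf, EX.Disease))

# An individual asserted only at the most specific level.
g.add((EX.patient_diagnosis, RDF.type, EX.Melanoma))

# The property path rdf:type/rdfs:subClassOf* walks the class hierarchy,
# so the query also returns the superclasses inferred from the ontology.
query = """
PREFIX ex: <http://example.org/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?cls WHERE {
  ex:patient_diagnosis rdf:type/rdfs:subClassOf* ?cls .
}
"""
for row in g.query(query):
    print(row.cls)  # ex:Melanoma, ex:SkinDisease, ex:Disease
```

The same graph could be handed to a full OWL reasoner for tasks such as consistency checking; the property-path query is simply the smallest self-contained illustration of symbolic inference.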

The role of KGs for a better XAI

Implementations of symbolic systems based on semantic technologies are well suited to improving explanations for non-insiders. The input features, hidden layers and computational units, and predicted outputs of deep learning models can be mapped to entities of KGs or to concepts and relationships of ontologies (knowledge matching). Traditionally, these ontology artifacts are the result of conceptualizations and practices adopted by experts from various disciplines, such as biology, finance, and law. As a consequence, they are very comprehensible to people with expertise in a specific domain (cross-disciplinary explanations), even if they have no skills in AI technologies. Moreover, in the context of semantic technologies, KGs and ontologies are natively built to be queried, and therefore they can provide answers to user requests (interactive explanations) and offer a symbolic layer for interpreting the behavior and results of a deep learning model.
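As an illustration of knowledge matching and interactive explanations, the sketch below links a classifier's output classes to KG entities and answers a follow-up question with a SPARQL query. The class-to-entity mapping, the miniature KG, and the symptom relation are assumptions made for this example; they are not the article's implementation.

```python
# Hypothetical sketch of "knowledge matching": the classifier's output classes are
# linked to KG entities, and explanations are produced by querying the KG.
# Requires rdflib; all entity names and facts are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDFS, Variable

EX = Namespace("http://example.org/")
kg = Graph()
kg.bind("ex", EX)

# A tiny domain KG, of the kind a domain expert could have curated.
kg.add((EX.Melanoma, RDFS.label, Literal("melanoma")))
kg.add((EX.Melanoma, EX.hasSymptom, EX.IrregularBorder))
kg.add((EX.Melanoma, EX.hasSymptom, EX.ColorVariation))
kg.add((EX.IrregularBorder, RDFS.label, Literal("irregular lesion border")))
kg.add((EX.ColorVariation, RDFS.label, Literal("uneven pigmentation")))

# Knowledge matching: each output index of the (hypothetical) classifier
# is mapped to an entity of the KG.
class_to_entity = {0: EX.Benign, 1: EX.Melanoma}

def explain(predicted_class: int) -> list:
    """Return human-readable facts the KG associates with the predicted class."""
    entity = class_to_entity[predicted_class]
    query = """
        PREFIX ex: <http://example.org/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?symptomLabel WHERE {
            ?diagnosis ex:hasSymptom ?symptom .
            ?symptom rdfs:label ?symptomLabel .
        }
    """
    results = kg.query(query, initBindings={Variable("diagnosis"): entity})
    return [str(row.symptomLabel) for row in results]

# Interactive explanation for a model that predicted class 1.
print(explain(1))  # ['irregular lesion border', 'uneven pigmentation']
```

Because the explanation comes from querying the KG rather than from inspecting network weights, the same mechanism can answer follow-up questions from domain experts who have no background in deep learning.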

Starting from these points, there are specific trajectories for future work on XAI, including: the exploitation of symbolic techniques to design novel deep neural architectures that natively encode explanations; the development of multi-modal explanation models able to provide insights from different perspectives, combining visual and textual artifacts; and the definition of a common explanation framework, based on KGs and ontologies, for comparing deep learning models and enabling proper validation strategies.

Reference

More information on this topic is available in our journal article entitled “On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI — Three Challenges for Future Research”.

