The Four Components of Trusted Artificial Intelligence

Building trust into AI systems is hard. How about establishing a fact sheet for AI systems?

Trust and transparency are at the forefront of conversations related to artificial intelligence (AI) these days. While we intuitively understand the idea of trusting AI agents, we are still trying to figure out the specific mechanics that translate trust and transparency into programmatic constructs. After all, what does trust mean in the context of an AI system?

Trust is a foundational building block of human socio-economic dynamics. In software development, over the last few decades, we have steadily built mechanisms for establishing trust in specific applications. When we board planes that fly on autopilot, or ride in cars driven entirely by robots, we are implicitly expressing trust in the creators of a specific software application. In software, trust mechanisms fundamentally rest on the deterministic nature of most applications: their behavior is uniquely determined by the code's workflow, which makes it intrinsically predictable. The non-deterministic nature of AI systems breaks this pattern and introduces new dimensions to enabling trust in AI agents. One of the most viable ideas proposed for establishing trust in AI systems came from IBM Research, in a well-known paper published over a year ago.

Trust is a dynamic derived from the process of minimizing risk. In software development, trust is built through mechanisms such as testability, auditability, documentation and many other elements that help establish the reputation of a piece of software. While all those mechanisms are relevant to AI systems, they are notoriously difficult to implement. The behavior of traditional software applications is dictated by explicit rules expressed in code; the behavior of AI agents is based on knowledge that evolves over time. The former approach is deterministic and predictable; the latter is non-deterministic and difficult to understand.

If we accept that AI is going to be a relevant part of our future, it is important to establish the foundations of trust in AI systems. Today, we regularly rely on AI models without having a clear understanding of their capabilities, knowledge or training processes. The concept of trust in AI systems remains highly subjective and hasn't been incorporated into popular machine learning frameworks or platforms. What is AI trust, and how can we measure it?

The Four Pillars of Trusted AI

Trust in human interaction is not only based on our interpretation of specific actions but also on social knowledge built over centuries. We understand that a behavior is discriminatory not only by judging it in real time but also by factoring in the socially accepted understanding that discrimination is degrading to human beings. How can we extrapolate these ideas to the world of artificial intelligence (AI)? In their paper, the IBM team proposed four fundamental pillars of trusted AI:

· Fairness: AI systems should use training data and models that are free of bias, to avoid the unfair treatment of certain groups.

· Robustness: AI systems should be safe and secure, not vulnerable to tampering or to compromise of the data they are trained on.

· Explainability: AI systems should provide decisions or suggestions that can be understood by their users and developers.

· Lineage: AI systems should include details of their development, deployment, and maintenance so they can be audited throughout their lifecycle.

Fairness

AI fairness is typically associated with minimizing bias in AI agents. Bias can be described as the mismatch between the training data distribution and a desired fair distribution. Unwanted bias in training data can result in unfair outcomes. Establishing tests for identifying, curating and minimizing bias in training datasets should be a key element in establishing fairness in AI systems. Naturally, fairness matters most in AI applications with a tangible social impact, such as credit scoring or legal decision support.
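As a concrete illustration, here is a minimal sketch of one common fairness check, demographic parity, which compares positive-outcome rates across groups. The predictions and group labels below are hypothetical; a real audit would use domain-appropriate metrics and protected attributes.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-outcome rates between two groups (0 = parity)."""
    rate_a = y_pred[group == 0].mean()  # positive rate for group A
    rate_b = y_pred[group == 1].mean()  # positive rate for group B
    return abs(rate_a - rate_b)

# Hypothetical binary predictions from a credit-approval model.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # hypothetical group labels
print(demographic_parity_difference(y_pred, group))  # 0.5 -> a large gap
```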

Explainability

Understanding how AI models arrive at specific decisions is another key principle of trusted AI. Arriving at meaningful explanations of an AI model's knowledge reduces uncertainty and helps to quantify its accuracy. While explainability might seem like an obvious way to improve trust in AI systems, its implementation is far from trivial. There is a natural tradeoff between the explainability of AI models and their accuracy: highly explainable models tend to be very simple and, therefore, not terribly accurate. From that perspective, striking the right balance between explainability and accuracy is essential to improving trust in an AI model.
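This tradeoff can be made tangible with a small scikit-learn sketch (the dataset and model choices are illustrative, and the exact scores will vary): a shallow decision tree can be printed as human-readable rules, while a large random forest is typically more accurate but opaque.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A shallow tree: readable as if/else rules, but usually less accurate.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)
# A 200-tree ensemble: typically more accurate, but opaque to inspection.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("tree accuracy:  ", tree.score(X_te, y_te))
print("forest accuracy:", forest.score(X_te, y_te))
print(export_text(tree))  # the tree's entire decision logic, human-readable
```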

Robustness

The concept of AI robustness is determined by two underlying factors: safety and security.

Safety

An AI system might be fair and explainable but still unsafe to use. AI safety is typically associated with an AI model's ability to incorporate societal norms, policies and regulations that correspond to well-established safe behaviors into the knowledge it builds. Increasing the safety of AI models is another key element of trusted AI systems.
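One simple way to make "safe behavior" concrete is a rule layer that vetoes model actions violating an explicit policy, regardless of how confident the model is. The action names and the policy below are invented purely for illustration.

```python
# Hypothetical policy: actions the system must never take autonomously.
DISALLOWED_ACTIONS = {"approve_over_limit", "share_pii"}

def safe_act(model_action: str) -> str:
    """Apply the policy before acting on a model's suggestion."""
    if model_action in DISALLOWED_ACTIONS:
        return "escalate_to_human"  # fall back to a safe default
    return model_action

print(safe_act("approve_loan"))  # allowed action passes through
print(safe_act("share_pii"))     # vetoed -> escalate_to_human
```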

Security

AI models are highly susceptible to all sorts of attacks, including many based on adversarial AI methods. The accuracy of AI models is directly correlated with their vulnerability to small perturbations of their inputs. That relationship is often exploited by malicious actors who try to alter specific data points in order to influence the behavior of an AI model. Testing and benchmarking AI models against adversarial attacks is key to establishing trust in AI systems. IBM has been doing some interesting work in this area.
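A standard way to probe this vulnerability is the fast gradient sign method (FGSM), sketched below in PyTorch. The tiny linear model and random tensors are placeholders for a real model and dataset.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.05):
    """Shift x by epsilon in the direction that most increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Placeholder classifier and inputs; a real test would use the deployed model.
model = nn.Linear(4, 3)
x = torch.randn(2, 4)
y = torch.tensor([0, 2])
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # labels may flip
```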

Lineage

AI models are constantly evolving, which makes their history challenging to trace. Establishing and tracking the provenance of training datasets, hyperparameter configurations and other metadata artifacts over time is important for establishing the lineage of an AI model. Understanding the lineage of AI models helps us establish trust from a historical perspective, something that is difficult to achieve by factoring in fairness, explainability and robustness alone.
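A minimal sketch of what such a provenance record might capture appears below; the fields, names and hashing scheme are assumptions about what lineage tracking could record, not a standard schema.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    model_name: str
    dataset_sha256: str    # fingerprint of the exact training data
    hyperparameters: dict  # configuration used for this training run
    trained_at: str        # UTC timestamp of the run

# Hypothetical in-memory training data; real pipelines would hash the files.
dataset = b"age,income,label\n34,52000,1\n29,41000,0\n"

record = LineageRecord(
    model_name="credit-scoring-v3",  # hypothetical model name
    dataset_sha256=hashlib.sha256(dataset).hexdigest(),
    hyperparameters={"learning_rate": 0.01, "max_depth": 6},
    trained_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```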

A Factsheet for AI Systems

The subject of disclosures and transparency in AI systems is a nascent area of research, but one that is key to the mainstream adoption of AI. Just as we use information sheets for hardware appliances or nutrition labels on food, we should consider establishing a factsheet for AI models. In their paper, IBM proposes a Supplier's Declaration of Conformity (SDoC, or factsheet for short) that provides information about the four key pillars of trusted AI. IBM's SDoC methodology should help answer basic questions about AI models such as the following (a sketch of such a factsheet in code appears after the list):

· Does the dataset used to train the service have a datasheet or data statement?

· Were the dataset and model checked for biases? If "yes," describe the bias policies that were checked, the checking methods, and the results.

· Was any bias mitigation performed on the dataset? If "yes," describe the mitigation method.

· Are the algorithm's outputs explainable/interpretable? If "yes," explain how explainability is achieved (e.g., directly explainable algorithm, local explainability, explanations via examples).

· Describe the testing methodology.

· Was the service checked for robustness against adversarial attacks? If "yes," describe the robustness policies that were checked, the checking methods, and the results.

· Is usage data from the service's operations retained/stored/kept?
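Here is one way such a factsheet might be represented programmatically, mirroring the questions above. The field names and answers are illustrative, not IBM's actual SDoC schema.

```python
import json

# Hypothetical factsheet for a fictional credit-scoring service.
factsheet = {
    "dataset_has_datasheet": True,
    "bias_checked": {
        "performed": True,
        "policies": ["demographic parity"],    # hypothetical policy
        "methods": ["held-out group comparison"],
        "results": "0.03 gap between groups",  # hypothetical result
    },
    "bias_mitigation": {"performed": True, "method": "reweighing"},
    "explainable_outputs": {"performed": True, "how": "explanations via examples"},
    "testing_methodology": "stratified k-fold cross-validation",
    "adversarial_robustness_checked": {
        "performed": True,
        "methods": ["FGSM probing (see the sketch above)"],
    },
    "usage_data_retained": False,
}
print(json.dumps(factsheet, indent=2))
```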

The idea of establishing a factsheet for AI models is as simple as it is relevant to building trusted AI systems. Some deep learning frameworks have been exploring how to enable the building blocks of SDoC as part of their programming models in order to support better levels of transparency. Enabling trust in deep learning systems is going to be a long journey, but concepts such as IBM's SDoC are a welcome step in the right direction.

