IBM Adversarial Robustness Toolbox Helps to Protect Neural Networks Against Malicious Attacks

The open source framework provides best practices to defend deep neural networks against adversarial threats.

Deep neural networks (DNNs) are rapidly achieving a level of sophistication that is dazzling even the most optimistic technologists in the space. However, with that sophistication has come a lack of interpretability of DNN models, and with it, risk. If we can't understand what a DNN is doing, how can we possibly protect it against potential vulnerabilities? Not surprisingly, sophisticated DNNs have proven to be extremely vulnerable to even simple manipulations of their models and training datasets. Generative adversarial neural networks (GANs) have emerged as one of the fundamental techniques for attacking DNNs. The rise of GAN-based attacks has forced machine learning specialists to regularly evaluate the robustness of DNN models. IBM has been one of the most active companies in this area and, about a year ago, compiled some of its findings into an open source framework known as the Adversarial Robustness Toolbox.

Generative adversarial neural networks (GANs) are one of the most active areas of research in the deep learning ecosystem. Conceptually, GANs are a form of unsupervised learning in which two neural networks build knowledge by competing against each other in a zero-sum game. While GANs are a great mechanism for knowledge acquisition, they can also be used to generate attacks against deep neural networks. In a very well-known example, a GAN attacker can cause imperceptible changes in training images to trick a classification model.
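For intuition, here is a minimal sketch of that zero-sum training loop in PyTorch: a generator tries to produce samples that look like the real data, while a discriminator learns to tell real from generated. The toy 1-D data, network sizes and training length are illustrative assumptions, not part of any IBM implementation.

```python
import torch
import torch.nn as nn

# Generator maps 8-D noise to a 1-D sample; discriminator scores "realness".
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # toy "real" data
    fake = G(torch.randn(64, 8))            # generated samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator into labelling fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```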

The topic of evaluating the robustness of models against adversarial attacks has been a top priority for AI powerhouses such as OpenAI and Google. A bit under the radar, IBM has been doing a lot of work trying to advance the research and implementation of adversarial attacks in deep neural networks. Just last week, IBM AI researchers published two different research papers in the area of GAN protection. Today, I would like to explore some of IBM's recent work on protecting neural networks against adversarial attacks and discuss its relevance for modern deep learning implementations.

White-Box vs. Black-Box Attacks

Adversarial attacks against deep neural networks can be classified into two main groups, white-box and black-box, based on the attacker's knowledge of the model's training policy. White-box adversarial attacks describe scenarios in which the attacker has access to the underlying training policy network of the target model. Research has found that introducing even small perturbations into the training policy can drastically affect the performance of the model.

Black-box adversarial attacks describe scenarios in which the attacker does not have complete access to the policy network. In the AI research literature, black-box attacks are classified into two main groups:

1) The adversary has access to the training environment and knowledge of the training algorithm and hyperparameters. It knows the neural network architecture of the target policy network, but not its random initialization. Researchers refer to this model as transferability across policies.

2) The adversary additionally has no knowledge of the training algorithm or hyperparameters. Researchers refer to this model as transferability across algorithms.

A simpler way to think about white-box and black-box adversarial attacks is whether the attacker is targeting a model during training or after it is deployed. Despite that simple distinction, the techniques used to defend against white-box and black-box attacks are fundamentally different. IBM has recently been working on both attack models, from both a research and an implementation standpoint. Let's take a look at some of IBM's recent efforts in adversarial attacks.

Adversarial Robustness Toolbox

The Adversarial Robustness Toolbox (ART) is one of the most complete resources for evaluating the robustness of deep neural networks against adversarial attacks. Open sourced by IBM a few months ago, ART incorporates techniques to prevent adversarial attacks on deep neural networks written in TensorFlow, Keras, PyTorch and MXNet, and support for more deep learning frameworks should be added shortly.

ART operates by examining and clustering the neural activations produced by a training dataset and trying to discriminate legitimate examples from those likely manipulated by an adversarial attack. The current version of ART focuses on two types of adversarial attacks: evasion and poisoning. For each type of adversarial attack, ART includes defense methods that can be incorporated into deep learning models.

Developers can start using ART via its Python SDK, which doesn't require any major modifications to the architecture of the deep neural network.
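As a rough illustration, the sketch below wraps a small PyTorch model with ART and generates adversarial examples using an evasion attack. The import paths follow recent ART releases (older versions exposed art.classifiers and art.attacks instead), and the tiny model and random inputs are placeholders rather than anything taken from IBM's documentation.

```python
import numpy as np
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# A small CNN for 28x28 grayscale images; stands in for any real model.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 10),
)

# ART wraps the existing network without changing its architecture.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Craft adversarial examples with the Fast Gradient Method, an evasion attack.
x = np.random.rand(16, 1, 28, 28).astype(np.float32)  # placeholder images
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)

print("max perturbation:", np.abs(x_adv - x).max())
```

The same wrapped classifier can then be passed to ART's defense modules (for example, adversarial training) to harden the model; the exact module names vary between ART versions.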

AutoZOOM

A large percentage of the adversarial attacks against deep neural networks are produced in a white-box setting in which the attacker has access to the network's training policy. Black-box attacks are both more challenging to implement and more difficult to defend against. For instance, a black-box attack against an image classification model typically needs to execute a large number of queries against the model in order to find the correct adversarial images. Many times, those queries cause a performance degradation in the model that has nothing to do with the attack itself but with the network's poor design for high-volume queries. In one example, a black-box adversarial attack can take more than 1 million queries to find the adversarial image. That level of computational resources is rarely available to attackers.

IBM's Autoencoder-based Zeroth Order Optimization Method (AutoZOOM) is a technique for creating more efficient black-box attacks. Initially published in a research paper, AutoZOOM also includes an open source implementation that can be used by developers across several deep learning frameworks. The goal of AutoZOOM is to reduce the number of queries needed to find adversarial examples, and it accomplishes that using two main building blocks:

i. An adaptive random gradient estimation strategy to balance query counts and distortion.

ii. Either an autoencoder trained offline with unlabeled data or a bilinear resizing operation, used for acceleration.

To achieve i, AutoZOOM features an optimized and query-efficient gradient estimator, which has an adaptive scheme that uses few queries to find the first successful adversarial perturbation and then uses more queries to fine-tune the distortion and make the adversarial example more realistic. To achieve ii, AutoZOOM implements a technique called “dimension reduction” to reduce the complexity of finding adversarial examples. The dimension reduction can be realized by an offline-trained autoencoder that captures data characteristics or by a simple bilinear image resizer, which does not require any training.
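To make the first building block concrete, the sketch below shows the general idea of random-direction zeroth-order gradient estimation, where the attacker can only query a scalar loss and never sees gradients. It is a simplified illustration of this family of estimators, not IBM's exact AutoZOOM estimator, and the function name and parameters are hypothetical.

```python
import numpy as np

def random_gradient_estimate(loss_fn, x, beta=0.01, num_queries=10):
    """Estimate the gradient of a black-box scalar loss at x by averaging
    finite differences along random unit directions (zeroth-order estimate)."""
    d = x.size
    base = loss_fn(x)                      # one query for the baseline loss
    grad = np.zeros_like(x)
    for _ in range(num_queries):           # one extra query per direction
        u = np.random.randn(*x.shape)
        u /= np.linalg.norm(u)
        grad += (loss_fn(x + beta * u) - base) / beta * u
    return d * grad / num_queries
```

An attacker feeds such an estimate into a gradient-style update of the candidate adversarial image, spending few queries per step at first and more later to fine-tune the distortion, which is the adaptive trade-off described above.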

The initial tests of AutoZOOM showed that the method is able to generate black-box adversarial examples with far fewer queries than traditional methods.

CNN-Cert

As you can probably tell from the examples, convolutional neural networks (CNNs) used for image classification are among the top targets of adversarial attacks. However, many of the current defense techniques are not optimized for CNN architectures. Created by AI researchers from the Massachusetts Institute of Technology (MIT) and IBM, CNN-Cert is a framework for certifying the robustness of CNNs against adversarial attacks.

The key innovation in CNN-Cert is deriving explicit network output bounds by considering the input/output relations of each building block, and the activation layers can use general activation functions beyond ReLU. The approach has been demonstrated to be about 11 to 17 times more efficient than traditional adversarial robustness certification methods. CNN-Cert is able to handle various architectures including convolutional layers, max-pooling layers, batch normalization layers and residual blocks, as well as general activation functions such as ReLU, tanh, sigmoid and arctan.
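To give a feel for what certified output bounds mean, the sketch below propagates an L-infinity ball of radius eps around an input through a single linear + ReLU block using plain interval arithmetic. CNN-Cert derives much tighter, block-specific bounds for convolutional, pooling and residual blocks, so this should be read only as a simplified stand-in with a hypothetical function name.

```python
import numpy as np

def interval_bounds_linear_relu(W, b, x, eps):
    """Elementwise output bounds of ReLU(W @ x + b) over the input set
    {x' : |x' - x|_inf <= eps}, via simple interval bound propagation."""
    center = W @ x + b
    radius = np.abs(W) @ np.full_like(x, eps)   # worst-case deviation per output
    lower, upper = center - radius, center + radius
    # ReLU is monotone, so applying it to the bounds preserves soundness.
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)
```

If the certified lower bound on the true class's logit stays above the upper bounds of all other logits, no perturbation inside the ball can change the prediction, which is exactly the kind of guarantee a certification method aims to provide.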

As you can see, IBM seems to be really committed to advancing the conversation about adversarial attacks on deep neural networks. Efforts like ART, AutoZOOM and CNN-Cert are among the most creative recent contributions to adversarial techniques. Hopefully, we will see some of these implementations included in mainstream deep learning frameworks soon.

