Which Face is Real?


The paper that presents this adversarial framework can be found here, along with the code used for the framework.

Which Face is Real?

Which Face Is Real? was developed by Jevin West and Carl Bergstrom from the University of Washington as part of the Calling Bullshit Project.

“Computers are good, but your visual processing systems are even better. If you know what to look for, you can spot these fakes at a single glance — at least for the time being. The hardware and software used to generate them will continue to improve, and it may be only a few years until humans fall behind in the arms race between forgery and detection.” — Jevin West and Carl Bergstrom

How Do You Tell the Difference Between the Images?

The differences show up in six main areas:

Water splotches

  • The algorithm produces shiny blobs that look somewhat like water splotches on old photographic prints.

Hair

  • Disconnected strands, hair that is too straight, or hair that is too streaked are common problems when generating hair.

Asymmetry

  • A common problem is asymmetry, most visibly with eyeglasses: the frame often takes one style on the left and another on the right, or there is a wayfarer-style ornament on one side but not the other. Other times the frame is simply crooked or jagged. Asymmetries also show up in facial hair, in different earrings in the left and right ear, and in different forms of collar or fabric on the left and right side.

Background problems

  • The background of the image can come out strange, with blurriness or misshapen objects. This is because the neural net is trained on the face, so less emphasis is given to the background of the image.

Fluorescent bleed

  • Fluorescent colors can sometimes bleed into the hair or face from the background, and observers can mistake this for colored hair.

Teeth

  • Teeth are also hard to render: they can come out odd-shaped or asymmetric, and sometimes three incisors appear in the image.

Testing Out the StyleGAN Algorithm

All the code for StyleGAN has been open-sourced in the stylegan repository, which explains how you can run the StyleGAN algorithm yourself. Let's get started with the basic system requirements.

System Requirements

  • Both Linux and Windows are supported, but we strongly recommend Linux for performance and compatibility reasons.
  • 64-bit Python 3.6 installation. We recommend Anaconda3 with numpy 1.14.3 or newer.
  • TensorFlow 1.10.0 or newer with GPU support.
  • One or more high-end NVIDIA GPUs with at least 11GB of DRAM. We recommend NVIDIA DGX-1 with 8 Tesla V100 GPUs.
  • NVIDIA driver 391.35 or newer, CUDA toolkit 9.0 or newer, cuDNN 7.3.1 or newer.
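
Before cloning the repository, it can help to confirm that the Python, NumPy, TensorFlow, and GPU parts of the setup are in place. The following is a small sanity-check sketch (not part of the stylegan repository) using the TensorFlow 1.x API:

# check_env.py -- quick sanity check for the StyleGAN prerequisites (not part of the repo).
import sys
import numpy as np
import tensorflow as tf

print('Python     :', sys.version.split()[0])       # should be a 64-bit 3.6 build
print('NumPy      :', np.__version__)               # 1.14.3 or newer
print('TensorFlow :', tf.__version__)               # 1.10.0 or newer
print('GPU found  :', tf.test.is_gpu_available())   # True if a CUDA-capable GPU is visible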

A minimal example that runs a pre-trained StyleGAN generator is given in pretrained_example.py. It is executed as follows:

> python pretrained_example.py

Downloading https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ .... done



Gs                              Params    OutputShape          WeightShape
---                             ---       ---                  ---
latents_in                      -         (?, 512)             -
...
images_out                      -         (?, 3, 1024, 1024)   -
---                             ---       ---                  ---
Total                           26219627




Once you have executed python pretrained_example.py, type ls results to see the generated image:



> ls results

example.png # https://drive.google.com/uc?id=1UDLT_zb-rof9kKH0GwiJW_bS9MoZi8oP
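
Under the hood, pretrained_example.py initializes TensorFlow, downloads and unpickles the pre-trained FFHQ generator, samples a random latent vector, and writes the generated face to results/example.png. The following is a condensed sketch of that script based on the stylegan repository; treat the exact random seed and truncation value as illustrative:

import os
import pickle
import numpy as np
import PIL.Image
import dnnlib
import dnnlib.tflib as tflib
import config

def main():
    # Initialize TensorFlow.
    tflib.init_tf()

    # Download and unpickle the pre-trained FFHQ generator (same URL as in the output above).
    url = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ'
    with dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f:
        _G, _D, Gs = pickle.load(f)  # Gs = long-term average of the generator, best sample quality

    # Sample a random latent vector and synthesize one image.
    rnd = np.random.RandomState(5)
    latents = rnd.randn(1, Gs.input_shape[1])
    fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
    images = Gs.run(latents, None, truncation_psi=0.7, randomize_noise=True, output_transform=fmt)

    # Save the result to results/example.png.
    os.makedirs(config.result_dir, exist_ok=True)
    PIL.Image.fromarray(images[0], 'RGB').save(os.path.join(config.result_dir, 'example.png'))

if __name__ == '__main__':
    main()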

Prepare the Datasets for Training

The training and evaluation scripts operate on datasets stored as multi-resolution TFRecords. Each dataset is represented by a directory containing the same image data in several resolutions to enable efficient streaming. There is a separate *.tfrecords file for each resolution, and if the dataset contains labels, they are stored in a separate file as well. By default, the scripts expect to find the datasets at datasets/<NAME>/<NAME>-<RESOLUTION>.tfrecords. The directory can be changed by editing config.py:

result_dir = 'results'

data_dir = 'datasets'

cache_dir = 'cache'

To obtain the FFHQ dataset (datasets/ffhq), please refer to the Flickr-Faces-HQ repository.

To obtain the CelebA-HQ dataset (datasets/celebahq), please refer to the Progressive GAN repository.

To obtain other datasets, including LSUN, please consult their corresponding project pages. The datasets can be converted to multi-resolution TFRecords using the provided dataset_tool.py:

> python dataset_tool.py create_lsun datasets/lsun-bedroom-full ~/lsun/bedroom_lmdb --resolution 256

> python dataset_tool.py create_lsun_wide datasets/lsun-car-512x384 ~/lsun/car_lmdb --width 512 --height 384

> python dataset_tool.py create_lsun datasets/lsun-cat-full ~/lsun/cat_lmdb --resolution 256

> python dataset_tool.py create_cifar10 datasets/cifar10 ~/cifar10

> python dataset_tool.py create_from_images datasets/custom-dataset ~/custom-images
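
After conversion, it is worth sanity-checking that the TFRecords can be read back. The sketch below is not part of the repository; it assumes the 'shape' (int64) / 'data' (bytes) feature layout that the repository's dataset reader expects, and uses the custom-dataset directory from the last command above as an example:

# inspect_tfrecords.py -- decode a few records from a converted dataset (a sketch, not part of the repo).
import glob
import os
import numpy as np
import tensorflow as tf

# Pick one of the .tfrecords files written by dataset_tool.py (the largest one = highest resolution).
tfrecord_path = max(glob.glob('datasets/custom-dataset/*.tfrecords'), key=os.path.getsize)

count = 0
for record in tf.python_io.tf_record_iterator(tfrecord_path):
    features = tf.train.Example.FromString(record).features.feature
    shape = list(features['shape'].int64_list.value)            # [channels, height, width]
    data = features['data'].bytes_list.value[0]                 # raw uint8 pixels in CHW order
    image = np.frombuffer(data, dtype=np.uint8).reshape(shape)
    if count < 3:
        print('record %d: shape %s, pixel range [%d, %d]' % (count, shape, image.min(), image.max()))
    count += 1
print('%s: %d records' % (tfrecord_path, count))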

Training the StyleGAN Networks

Once the datasets are set up, you can train your own StyleGAN networks as follows:

  1. Edit train.py to specify the dataset and training configuration by uncommenting or editing specific lines.
  2. Run the training script with python train.py.
  3. The results are written to a newly created directory results/<ID>-<DESCRIPTION>.
  4. The training may take several days (or weeks) to complete, depending on the configuration.

By default, train.py is configured to train the highest-quality StyleGAN (configuration F in Table 1) for the FFHQ dataset at 1024×1024 resolution using 8 GPUs.
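
Since a run can last days or weeks, it is convenient to keep an eye on the results/<ID>-<DESCRIPTION> directory mentioned in step 3. The helper below is a sketch, not part of the repository, and assumes checkpoints are written as network-snapshot-*.pkl files inside each run directory:

# monitor_runs.py -- list training runs under results/ and their most recent checkpoint (a sketch).
import glob
import os

for run_dir in sorted(glob.glob('results/*')):
    if not os.path.isdir(run_dir):
        continue
    snapshots = sorted(glob.glob(os.path.join(run_dir, 'network-snapshot-*.pkl')))
    latest = os.path.basename(snapshots[-1]) if snapshots else 'no snapshots yet'
    print('%-45s %s' % (os.path.basename(run_dir), latest))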

Expected StyleGAN Training Time

Below you will find NVIDIA's reported training times for the default configuration of the train.py script (available in the stylegan repository) on Tesla V100 GPUs for the FFHQ dataset.

(Image: table of StyleGAN training times)

StyleGAN: Bringing it All Together

The algorithm behind this amazing app is the brainchild of Tero Karras, Samuli Laine, and Timo Aila at NVIDIA, who named it StyleGAN. It builds on earlier work by Ian Goodfellow and colleagues on Generative Adversarial Networks (GANs).

Generative models have a limitation: it is hard to control characteristics of the output, such as the facial features in a generated photograph. NVIDIA's StyleGAN is a fix for this limitation. The model lets the user adjust style inputs that control these aspects of the generated photographs.

StyleGAN handles this variability by injecting styles into the image at each convolution layer. These styles represent different attributes of a photo of a human, such as facial features, background color, hair, and wrinkles. In the style-mixing demonstration, the model generates two images, A and B, and then combines them by taking the low-level styles from A and the rest from B (see the sketch after the list below). At each level, a different set of features (styles) shapes the generated image:

  • Coarse styles (resolutions 4×4 to 8×8): pose, general hair style, face shape
  • Middle styles (resolutions 16×16 to 32×32): finer facial features, eyes
  • Fine styles (resolutions 64×64 to 1024×1024): color scheme and fine details
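
This coarse/middle/fine split is exactly what the style-mixing sketch below exploits: map two latent vectors through the mapping network, overwrite the coarse layers of B's intermediate latent with A's, and run the synthesis network. It follows the pattern of generate_figures.py in the stylegan repository; the layer range, seed, and output file name are illustrative:

import pickle
import numpy as np
import PIL.Image
import dnnlib
import dnnlib.tflib as tflib
import config

tflib.init_tf()

# Load the pre-trained FFHQ generator.
url = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ'
with dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f:
    _G, _D, Gs = pickle.load(f)

fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)

# Two random faces A and B, mapped into the intermediate style space W.
latents = np.random.RandomState(1).randn(2, Gs.input_shape[1])
dlatents = Gs.components.mapping.run(latents, None)   # shape: [2, num_layers, 512]

# Take the coarse styles (layers 0-3, resolutions 4x4 to 8x8) from A and everything else from B.
mixed = dlatents[1].copy()
mixed[0:4] = dlatents[0][0:4]

images = Gs.components.synthesis.run(mixed[np.newaxis], randomize_noise=False, output_transform=fmt)
PIL.Image.fromarray(images[0], 'RGB').save('style_mix_example.png')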

(Image: NVIDIA's StyleGAN generator architecture)

Have you tested out StyleGAN before, or is this your first time? Let us know in the comment section below. We are always looking for new and creative ways the community is using these technologies and frameworks.


