This is the TensorFlow code for our paper *Patch-wise Attack for Fooling Deep Neural Network*; a PyTorch version can be found here.
In our paper, we propose a novel Patch-wise Iterative Method that uses an amplification factor and guides the gradient in a feasible direction. Compared with state-of-the-art attacks, we further improve the success rate by 3.7% on average for normally trained models and by 9.1% for defense models. We hope the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods.
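To make the idea concrete, below is a minimal NumPy sketch of one patch-wise amplified iteration. It is an illustration only, not the official implementation: the kernel shape, the zero-center uniform project kernel, and the function/parameter names (`pi_fgsm_step`, `beta`, `gamma`, `W_p`) are assumptions for the sketch. The amplified step overshoots the per-step budget, and the overflow ("cut noise") is projected onto surrounding pixels instead of being discarded.

```python
import numpy as np

def project_kernel(k=3):
    # Uniform kernel with a zero center (one common choice; an assumption here).
    W = np.ones((k, k), dtype=np.float64) / (k * k - 1)
    W[k // 2, k // 2] = 0.0
    return W

def conv2d_same(img, W):
    # Naive 'same' convolution with zero padding for a single-channel image.
    k = W.shape[0]
    p = k // 2
    padded = np.pad(img, p, mode="constant")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * W)
    return out

def pi_fgsm_step(x, x_nat, grad, a, eps, T, beta, gamma, W_p):
    """One patch-wise amplified iteration (a sketch, not the official code).

    x      : current adversarial example
    x_nat  : clean input (center of the L_inf ball)
    grad   : gradient of the loss w.r.t. x
    a      : accumulated amplified perturbation
    eps    : total L_inf budget; T: number of iterations
    beta   : amplification factor; gamma: project factor
    """
    step = beta * eps / T                     # amplified step size
    a = a + step * np.sign(grad)              # accumulate amplified noise
    # Cut noise: the portion of the accumulated noise that exceeds the budget.
    C = np.clip(np.abs(a) - eps, 0.0, None) * np.sign(a)
    # Project the overflow onto neighboring pixels (the patch-wise update).
    x = x + step * np.sign(grad) + gamma * np.sign(conv2d_same(C, W_p))
    x = np.clip(x, x_nat - eps, x_nat + eps)  # stay inside the L_inf ball
    return x, a
```

With `beta = 1` and `gamma = 0` this reduces to a plain I-FGSM step; the amplification and projection are what distribute the perturbation over patches rather than single pixels.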
In the targeted attack case, we extend our Patch-wise Iterative Method to the Patch-wise++ Iterative Method. More details can be found here.
TensorFlow 1.14, gast 0.2.2, Python 3.7
Download the models
Normally trained models (DenseNet can be found here)
Then put these models into "./models/"
Run the code
python project_iter_attack.py
If you find this work is useful in your research, please consider citing:
@inproceedings{GaoZhang2020PatchWise,
author = {Lianli Gao and
Qilong Zhang and
Jingkuan Song and
Xianglong Liu and
Heng Tao Shen},
title = {Patch-Wise Attack for Fooling Deep Neural Network},
booktitle = {European Conference on Computer Vision},
year = {2020}
}